News-Feeds

[ Reuters | Slashdot | BBC News ]
[ Image Archive ]

Slashdot

    - Grammarly Disables Tool Offering Generative-AI Feedback Credited To Real Writers
    Grammarly has disabled its Expert Review feature after backlash from writers whose names were used to present AI-generated feedback without their permission. Superhuman (formerly Grammarly) CEO Shishir Mehrotra wrote in a LinkedIn post that the company will disable Expert Review while they "reimagine" the feature: Back in August, we launched a Grammarly agent called Expert Review. The agent draws on publicly available information from third-party LLMs to surface writing suggestions inspired by the published work of influential voices. Over the past week, we received valid critical feedback from experts who are concerned that the agent misrepresented their voices. This kind of scrutiny improves our products, and we take it seriously. As context, the agent was designed to help users discover influential perspectives and scholarship relevant to their work, while also providing meaningful ways for experts to build deeper relationships with their fans. We hear the feedback and recognize we fell short on this. I want to apologize and acknowledge that we'll rethink our approach going forward. After careful consideration, we have decided to disable Expert Review while we reimagine the feature to make it more useful for users, while giving experts real control over how they want to be represented -- or not represented at all. We deeply believe in our mission to solve the "last mile of AI" by bringing AI directly to where people work, and we see this as a significant opportunity for experts. For millions of users, Grammarly is a trusted writing sidekick -- ever-present in every application, ready to help. We're opening up this platform so anyone can build agents that work like Grammarly -- expanding from one sidekick to a whole team. Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal. 
For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has. But in this world, experts choose to participate, shape how their knowledge is represented, and control their business model. That future excites me, and I hope to build it with experts who want to develop it alongside us.

    Read more of this story at Slashdot.



    - Swiss E-Voting Pilot Can't Count 2,048 Ballots After USB Keys Fail To Decrypt Them
    A Swiss e-voting pilot was suspended after officials couldn't decrypt 2,048 ballots because the USB keys needed to unlock them failed. "Three USB sticks were used, all with the correct code, but none of them worked," spokesperson Marco Greiner told the Swiss Broadcasting Corporation's Swissinfo service. The canton government says it "deeply regrets" the incident and has launched an investigation with authorities. The Register reports: Basel-Stadt announced the problem with its e-voting pilot, open to about 10,300 locals living abroad and 30 people with disabilities, last Friday afternoon. It encouraged participants to deliver a paper vote to the town hall or use a polling station but admitted this would not be possible for many. By the close of polling on Sunday, its e-voting system had collected 2,048 votes, but Basel-Stadt officials were not able to decrypt them with the hardware provided, despite the involvement of IT experts. [...] The votes made up less than 4 percent of those cast in Basel-Stadt and would not have changed any results, but the canton is delaying confirmation of voting figures until March 21 and suspending its e-voting pilot until the end of December, while its public prosecutor's office has started criminal proceedings. The country's Federal Chancellery said e-voting in three other cantons -- Thurgau, Graubunden, and St Gallen -- along with the nationally used Swiss Post e-voting system, had not been affected.

    Read more of this story at Slashdot.



    - Binance Sues WSJ, Panicked By Gov't Probes Into Sanctioned Crypto Transfers
    An anonymous reader quotes a report from Ars Technica: Binance is hoping that suing (PDF) The Wall Street Journal for defamation might help shake off a fresh round of government probes into how the cryptocurrency exchange failed to detect $1.7 billion in transfers to a network that was funding Iran-backed terror groups. The lawsuit comes after a Wall Street Journal investigation, based on conversations with insiders and reviews of internal documents, reported that Binance had quietly dismantled its own investigation into the unlawful transfers and then fired compliance staff who initially flagged them. Alleging that the report falsely accused Binance of retaliation -- among 10 other allegedly false claims -- Binance accused the Journal of conducting a "sham" investigation that intentionally disregarded the company's statements. That included supposedly failing to note that Binance had not closed its investigation into the unlawful transfers. Binance's role in the large-scale violation of US sanctions laws is currently being investigated by the Justice and Treasury Departments. Congress members also took notice, including Sen. Richard Blumenthal (D-Conn.), ranking member of the Senate Permanent Subcommittee on Investigations (PSI), who launched an additional inquiry. In a letter to Binance CEO Richard Teng, Blumenthal cited the Journal's report, as well as reporting from The New York Times and Fortune, while demanding that Binance explain how it managed to overlook the money-laundering for so long and why compliance staff members were fired. In its complaint Wednesday, Binance claimed that these probes may "be just the tip of the iceberg" if the record is not corrected. The reputational harm is particularly damaging, the exchange noted, since Binance has allegedly worked hard to strengthen its compliance after reaching a settlement with the US government in 2023. 
In taking that plea deal, Binance admitted to violating anti-money laundering and sanctions laws and paid a $4.3 billion fine, and its founder, Changpeng Zhao, eventually pled guilty to a related charge. Since that scandal, Binance claimed that the WSJ has "made a business of maligning both the cryptocurrency industry generally and Binance specifically." That's why the Journal allegedly rushed to publish its story following a similar New York Times investigation. Alleging that the WSJ was financially motivated to publish a negative story that would get more clicks, Binance claimed the Journal provided little time to respond and then failed to make necessary corrections before and after publication.

    Read more of this story at Slashdot.



    - Nvidia Is Planning to Launch Its Own Open-Source OpenClaw Competitor
    Nvidia is preparing to launch an open-source AI agent platform called NemoClaw, designed to compete with the likes of OpenClaw. According to Wired, the platform will allow enterprise software companies to dispatch AI agents to perform tasks for their own workforces. "Companies will be able to access the platform regardless of whether their products run on Nvidia's chips," the report adds. From the report: The move comes as Nvidia prepares for its annual developer conference in San Jose next week. Ahead of the conference, Nvidia has reached out to companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike to forge partnerships for the agent platform. It's unclear whether these conversations have resulted in official partnerships. Since the platform is open source, it's likely that partners would get free, early access in exchange for contributing to the project, sources say. Nvidia plans to offer security and privacy tools as part of this new open-source agent platform. [...] For Nvidia, NemoClaw appears to be part of an effort to court enterprise software companies by offering additional layers of security for AI agents. It's also another step in the company's embrace of open-source AI models, part of a broader strategy to maintain its dominance in AI infrastructure at a time when leading AI labs are building their own custom chips. Nvidia's software strategy until now has been heavily reliant on its CUDA platform, a famously proprietary system that locks developers into building software for Nvidia's GPUs and has created a crucial "moat" for the company.

    Read more of this story at Slashdot.



    - YouTube Expands AI Deepfake Detection To Politicians, Government Officials, and Journalists
    YouTube is expanding its AI deepfake detection tools to a pilot group of politicians, government officials, and journalists, allowing them to identify and request removal of unauthorized AI-generated videos impersonating them. TechCrunch reports: The technology itself launched last year to roughly 4 million YouTube creators in the YouTube Partner Program, following earlier tests. Similar to YouTube's existing Content ID system, which detects copyright-protected material in users' uploaded videos, the likeness detection feature looks for simulated faces made with AI tools. These tools are sometimes used to try to spread misinformation and manipulate people's perception of reality, as they leverage the deepfaked personas of notable figures -- like politicians or other government officials -- to say and do things in these AI videos that they didn't in real life. With the new pilot program, YouTube aims to balance users' free expression with the risks associated with AI technology that can generate a convincing likeness of a public figure. [...] [Leslie Miller, YouTube's vice president of Government Affairs and Public Policy] explained that not all of the detected matches would be removed when requested. Instead, YouTube would evaluate each request under its existing privacy policy guidelines to determine whether the content is parody or political critique, which are protected forms of free expression. The company noted it's advocating for these protections at a federal level, too, with its support for the NO FAKES Act in D.C., which would regulate the use of AI to create unauthorized recreations of an individual's voice and visual likeness. To use the new tool, eligible pilot testers must first prove their identity by uploading a selfie and a government ID. They can then create a profile, view the matches that show up, and optionally request their removal. 
YouTube says it plans to eventually give people the ability to prevent uploads of violating content before they go live or, possibly, allow them to monetize those videos, similar to how its Content ID system works. The company would not confirm which politicians or officials would be among its initial testers, but said the goal is to make the technology broadly available over time.

    Read more of this story at Slashdot.



    - China Moves To Curb OpenClaw AI Use At Banks, State Agencies
    An anonymous reader quotes a report from Bloomberg: Chinese authorities moved to restrict state-run enterprises and government agencies from running OpenClaw AI apps on office computers, acting swiftly to defuse potential security risks after companies and consumers across China began experimenting with the agentic AI phenomenon. Government agencies and state-owned enterprises, including the largest banks, have received notices in recent days warning them against installing OpenClaw software on office devices for security reasons [...]. Several of them were instructed to notify superiors if they had already installed related apps for security checks and possible removal, some of the people said. Certain employees, including those at state-run banks and some government agencies, were banned from installing OpenClaw on office computers and also personal phones using the company's network, some of the people said. One person said the ban was also extended to the families of military personnel. Other notices stopped short of calling for an outright ban on OpenClaw software, saying only that prior approval is needed before use, the people said. The warning underscores Beijing's growing concern about OpenClaw, an agentic AI platform that requires unusually broad access to private data and can communicate externally, potentially exposing computers to external attack. [...] Despite the potential security risks, companies from Tencent to JD.com Inc. have been rolling out OpenClaw apps to try to capitalize on the groundswell of enthusiasm, while several local government agencies have declared millions of yuan in subsidies for companies that develop atop the platform. [...] Tech giants like Tencent and Alibaba, along with AI upstarts ranging from Moonshot to MiniMax, have rolled out their own variants of the software, touting simple one-click adoption.
A slew of government agencies, in cities from Shenzhen to Wuxi, have issued notices offering multimillion-yuan subsidies to startups leveraging OpenClaw to make advances. The frenzy has helped drive up shares of AI model developer MiniMax nearly 640% since its listing just two months ago. It's now worth about $49 billion, surpassing Baidu -- once viewed as the frontrunner in Chinese AI development -- in market value. The company launched MaxClaw, an agent built on OpenClaw, in late February.

    Read more of this story at Slashdot.



    - ASUS Executive Says MacBook Neo is 'Shock' to PC Industry
    ASUS says the MacBook Neo is a "shock" to the Windows PC ecosystem. "In the past, Apple's pricing situation has always been high, so for them to release a very budget-friendly product, this is obviously a shock to the entire industry," said ASUS co-CEO S.Y. Hsu in a Tuesday earnings call. While he expects PC makers to respond, rising AI-driven memory shortages could push hardware prices higher across the industry. PCMag reports: Hsu said he believes all the PC players -- including Microsoft, Intel, and AMD -- take the MacBook Neo threat seriously. "In fact, in the entire PC ecosystem, there have been a lot of discussions about how to compete with this product," he added, given that rumors about the MacBook Neo have been making the rounds for at least a year. Despite the competitive threat, Hsu argued that the MacBook Neo could have limited appeal. He pointed to the laptop's 8GB of "unified memory," or what amounts to its RAM, and how customers can't upgrade it. He also described the MacBook Neo as a "content consumption" device, similar to an iPad. "This is different from the use case of a mainstream notebook," which can handle more compute-intensive tasks, Hsu said. "How big of an impact [the MacBook Neo] will have on the PC industry will still require some time for us to observe," Hsu said while suggesting it might not gain traction among Windows PC users due to software differences. "Of course, the entire Windows PC ecosystem will push out products to compete against Apple," he added.

    Read more of this story at Slashdot.



    - Meta To Charge Advertisers a Fee To Offset Europe's Digital Taxes
    Meta will begin charging advertisers a 2-5% "location fee" to offset digital services taxes imposed by several European countries, including the UK, France, Italy, Spain, Austria, and Turkey. Reuters reports: The fee, for image or video ads delivered on Meta platforms including WhatsApp click-to-message campaigns and marketing messages together with ads, will apply from July 1 and will also cover other government-imposed levies. "Until now, Meta has covered these additional costs. These changes are part of Meta's ongoing effort to respond to the evolving regulatory landscape and align with industry standards," the company said in a blog post. The location fees are determined by where the audience is located, not the advertiser's business location. Meta listed six countries where the fees will apply: 2% in the United Kingdom, 3% in France, Italy, and Spain, and 5% in Austria and Turkey.
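The schedule above reduces to a lookup keyed by the audience's country rather than the advertiser's. A minimal sketch, assuming the reported rates are the complete list (the country codes and function are illustrative, not Meta's API):

```python
# Location-fee rates as reported by Reuters, keyed by audience country.
# These codes and this helper are illustrative assumptions.
LOCATION_FEE = {
    "UK": 0.02,
    "FR": 0.03, "IT": 0.03, "ES": 0.03,
    "AT": 0.05, "TR": 0.05,
}

def spend_with_location_fee(base_spend: float, audience_country: str) -> float:
    """Total charge after the location fee for the ad audience's country."""
    rate = LOCATION_FEE.get(audience_country, 0.0)  # no fee elsewhere
    return round(base_spend * (1 + rate), 2)
```

So a 100-euro campaign aimed at an Austrian audience would bill at 105, while the same campaign aimed at a country outside the six listed would bill at face value.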

    Read more of this story at Slashdot.



    - Yann LeCun Raises $1 Billion To Build AI That Understands the Physical World
    An anonymous reader quotes a report from Wired: Advanced Machine Intelligence (AMI), a new Paris-based startup cofounded by Meta's former chief AI scientist Yann LeCun, announced Monday it has raised more than $1 billion to develop AI world models. LeCun argues that most human reasoning is grounded in the physical world, not language, and that AI world models are necessary to develop true human-level intelligence. "The idea that you're going to extend the capabilities of LLMs [large language models] to the point that they're going to have human-level intelligence is complete nonsense," he said in an interview with WIRED. The financing, which values the startup at $3.5 billion, was co-led by investors such as Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Other notable backers include Mark Cuban, former Google CEO Eric Schmidt, and French billionaire and telecommunications executive Xavier Niel. AMI (pronounced like the French word for friend) aims to build "a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe," the company says in a press release. The startup says it will be global from day one, with offices in Paris, Montreal, Singapore, and New York, where LeCun will continue working as a New York University professor in addition to leading the startup. AMI will be the first commercial endeavor for LeCun since his departure from Meta in November 2025. [...] LeCun says AMI aims to work with companies in manufacturing, biomedical, robotics, and other industries that have lots of data. For example, he says AMI could build a realistic world model of an aircraft engine and work with the manufacturer to help them optimize for efficiency, minimize emissions, or ensure reliability. LeCun says AMI will release its first AI models quickly, but he's not expecting most people to take notice. 
The company will first work with partners such as Toyota and Samsung, and then will learn how to apply its technology more broadly. Eventually, he says, AMI intends to develop a "universal world model," which would be the basis for a generally intelligent system that could help companies regardless of what industry they work in. "It's very ambitious," he says with a smile.

    Read more of this story at Slashdot.



    - Valve Faces Second Class-Action Lawsuit Over Loot Boxes
    Valve is facing a new consumer class-action lawsuit two weeks after New York sued the video game company for "letting children and adults illegally gamble" with loot boxes. The new lawsuit is similar, alleging that loot boxes in games like Counter-Strike 2, Dota 2, and Team Fortress 2 are "carefully engineered to extract money from consumers, including children, through deceptive, casino-style psychological tactics." "We believe Valve deliberately engineered its gambling platform and profited enormously from it," Steve Berman, founder and managing partner at law firm Hagens Berman, said in a press release. "Consumers played these games for entertainment, unaware that Valve had allegedly already stacked the odds against them. We intend to hold Valve accountable and put money back in the pockets of consumers." PC Gamer reports: The system is well known to anyone who's played a Valve multiplayer game: Earn a locked loot box by playing, pay $2.50 for a key, unlock it, get a digital doohickey that's sometimes worth hundreds or even thousands of dollars but far more often is worth just a few pennies. Is that gambling? If these cases go to court, we'll find out. The full complaint points out that the unlocking process is even designed to look like a slot machine: "Images of possible items scroll across the screen, spinning fast at first, then slowing to a stop on the player's 'prize.' Players buy and open loot boxes for the same reason people play slot machines -- the hope of a valuable payout." Loot boxes, the complaint continues, are not "incidental features" of Valve's games, but rather "a deliberate, carefully engineered revenue model." So too is the Steam Community Market, and Steam itself, which the suit claims is "deliberately designed" to enable the sale of digital items on third-party marketplaces through "trade URLs," despite Valve's terms of service prohibiting off-platform sales. 
And while the debate over whether loot boxes constitute a form of gambling continues to rage, the suit claims Valve's system does indeed qualify under Washington law, which defines gambling as "staking or risking something of value upon the outcome of a contest of chance or a future contingent event not under the person's control or influence." "Valve's loot boxes satisfy every element of this definition," the lawsuit alleges. "Users stake money (the price of a key) on the outcome of a contest of chance (the random selection of a virtual item), and the items received are 'things of value' under RCW 9.46.0285 because they can be sold for real money through Valve's own marketplace and through third-party marketplaces that Valve has fostered and facilitated."
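The mechanic the complaint describes -- staking a fixed key price on a randomly valued item -- can be illustrated with a Monte Carlo sketch. The drop table below is entirely hypothetical; Valve does not publish its odds:

```python
import random

# Hypothetical drop table: (item market value in dollars, probability).
# Real odds are undisclosed; these numbers only mimic the "mostly pennies,
# occasionally hundreds" distribution described in the complaint.
DROP_TABLE = [(0.03, 0.80), (0.50, 0.15), (8.00, 0.045), (400.00, 0.005)]
KEY_PRICE = 2.50  # key price cited in the complaint

def open_box(rng: random.Random) -> float:
    """Draw one item's market value from the drop table."""
    roll, cum = rng.random(), 0.0
    for value, prob in DROP_TABLE:
        cum += prob
        if roll < cum:
            return value
    return DROP_TABLE[-1][0]

def expected_return(n: int = 100_000, seed: int = 1) -> float:
    """Average profit or loss per box over n simulated opens."""
    rng = random.Random(seed)
    return sum(open_box(rng) for _ in range(n)) / n - KEY_PRICE
```

With this table the analytic expected item value is about $2.46, just under the $2.50 key price: most opens lose money, rare jackpots keep players opening, and the house retains a small edge -- the slot-machine structure the suit alleges.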

    Read more of this story at Slashdot.



    - A 1,300-Pound NASA Spacecraft To Re-Enter Earth's Atmosphere
    Van Allen Probe A, a 1,300-pound (600 kg) NASA satellite launched in 2012 to study Earth's radiation belts, is expected to re-enter Earth's atmosphere this week. While most of it is expected to burn up during descent, "some components may survive," reports the BBC. "The space agency said there is a one in 4,200 chance of being harmed by a piece of the probe, which it characterized as 'low' risk." From the report: The spacecraft is projected to re-enter around 19:45 EST (00:45 GMT) on Tuesday, the U.S. Space Force predicted, according to NASA, though there is a 24-hour margin of "uncertainty" in the timing. [...] The spacecraft and its twin, Van Allen Probe B, were on a mission to gather unprecedented data on Earth's two permanent radiation belts. It was not immediately clear where in Earth's atmosphere the satellite is projected to re-enter. NASA and the U.S. Space Force have said they will monitor the re-entry and update any predictions. [...] Van Allen Probe B is not expected to re-enter the Earth's atmosphere before 2030.

    Read more of this story at Slashdot.



    - After Outages, Amazon To Make Senior Engineers Sign Off On AI-Assisted Changes
    An anonymous reader quotes a report from the Financial Times: Amazon's ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a "deep dive" into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a "trend of incidents" in recent months, characterized by a "high blast radius" and "Gen-AI assisted changes" among other factors, according to a briefing note for the meeting seen by the FT. Under "contributing factors" the note included "novel GenAI usage for which best practices and safeguards are not yet fully established." "Folks, as you likely know, the availability of the site and related infrastructure has not been good recently," Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT. The note ahead of Tuesday's meeting did not specify which particular incidents the group planned to discuss. [...] Treadwell, a former Microsoft engineering executive, told employees that Amazon would focus its weekly "This Week in Stores Tech" (TWiST) meeting on a "deep dive into some of the issues that got us here as well as some short immediate term initiatives" the group hopes will limit future outages. He asked staff to attend the meeting, which is normally optional. Junior and mid-level engineers will now require more senior engineers to sign off on any AI-assisted changes, Treadwell added. Amazon said the review of website availability was "part of normal business" and it aims for continual improvement. "TWiST is our regular weekly operations meeting with a specific group of retail technology leaders and teams where we review operational performance across our store," the company said.

    Read more of this story at Slashdot.



    - Tony Hoare, Turing Award-Winning Computer Scientist Behind QuickSort, Dies At 92
    Tony Hoare, the Turing Award-winning pioneer who created the Quicksort algorithm, developed Hoare logic, and advanced theories of concurrency and structured programming, has died at age 92. News of his passing was shared today in a blog post. The site I Programmer also commemorated Hoare in a post highlighting his contributions to computer science and the lasting impact of his work. Personal accounts have been shared on Hacker News and Reddit. Many Slashdotters may know Hoare for his aphorism regarding software design: "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."
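Hoare's best-known algorithm is short enough to quote in code. A sketch of Quicksort using his original partition scheme, in which two indices converge from the ends of the range:

```python
def hoare_partition(a, lo, hi):
    """Hoare's scheme: after it returns j, every element of a[lo..j]
    is <= pivot and every element of a[j+1..hi] is >= pivot."""
    pivot = a[(lo + hi) // 2]
    i, j = lo - 1, hi + 1
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j
        a[i], a[j] = a[j], a[i]  # swap out-of-place pair

def quicksort(a, lo=0, hi=None):
    """Sort the list a in place."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = hoare_partition(a, lo, hi)
        quicksort(a, lo, p)      # note: p stays on the left side
        quicksort(a, p + 1, hi)
```

Unlike the more commonly taught Lomuto partition, Hoare's version does fewer swaps on average and never places the pivot at a fixed final position, which is why the left recursion includes index p.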

    Read more of this story at Slashdot.



    - Intel Demos Chip To Compute With Encrypted Data
    An anonymous reader quotes a report from IEEE Spectrum: Worried that your latest ask to a cloud-based AI reveals a bit too much about you? Want to know your genetic risk of disease without revealing it to the services that compute the answer? There is a way to do computing on encrypted data without ever having it decrypted. It's called fully homomorphic encryption, or FHE. But there's a rather large catch. It can take thousands -- even tens of thousands -- of times longer to compute on today's CPUs and GPUs than simply working with the decrypted data. So universities, startups, and at least one processor giant have been working on specialized chips that could close that gap. Last month at the IEEE International Solid-State Circuits Conference (ISSCC) in San Francisco, Intel demonstrated its answer, Heracles, which sped up FHE computing tasks as much as 5,000-fold compared to a top-of-the-line Intel server CPU. Startups are racing to beat Intel and each other to commercialization. But Sanu Mathew, who leads security circuits research at Intel, believes the CPU giant has a big lead, because its chip can do more computing than any other FHE accelerator yet built. "Heracles is the first hardware that works at scale," he says. The scale is measurable both physically and in compute performance. While other FHE research chips have been in the range of 10 square millimeters or less, Heracles is about 20 times that size and is built using Intel's most advanced, 3-nanometer FinFET technology. And it's flanked inside a liquid-cooled package by two 24-gigabyte high-bandwidth memory chips -- a configuration usually seen only in GPUs for training AI. In terms of scaling compute performance, Heracles showed muscle in live demonstrations at ISSCC. At its heart, the demo was a simple private query to a secure server. It simulated a request by a voter to make sure that her ballot had been registered correctly.
The state, in this case, has an encrypted database of voters and their votes. To maintain her privacy, the voter would not want to have her ballot information decrypted at any point; so using FHE, she encrypts her ID and vote and sends it to the government database. There, without decrypting it, the system determines if it is a match and returns an encrypted answer, which she then decrypts on her side. On an Intel Xeon server CPU, the process took 15 milliseconds. Heracles did it in 14 microseconds. While that difference isn't something a single human would notice, verifying 100 million voter ballots adds up to more than 17 days of CPU work versus a mere 23 minutes on Heracles.
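The "compute on encrypted data" property can be illustrated with a much simpler cousin of FHE: the Paillier cryptosystem, which is additively homomorphic -- multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The sketch below uses deliberately tiny, insecure parameters:

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic). Parameters are
# deliberately tiny and NOT secure; real deployments use 2048-bit-plus
# moduli, and full FHE schemes are far more elaborate than this.
p, q = 293, 433                  # toy primes
n = p * q                        # public modulus
n2 = n * n
g = n + 1                        # standard generator choice
lam = math.lcm(p - 1, q - 1)     # private key
mu = pow(lam, -1, n)             # precomputed decryption constant

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:   # blinding factor must be invertible mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Multiplying ciphertexts adds the plaintexts -- the party holding only
# the public key never sees 15000, 27, or their sum in the clear.
c_sum = (encrypt(15000) * encrypt(27)) % n2
assert decrypt(c_sum) == 15027
```

A fully homomorphic scheme additionally supports multiplication of plaintexts under encryption, which is what makes arbitrary computation -- like the encrypted voter lookup in the demo -- possible, and also what makes it so slow without dedicated hardware like Heracles.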

    Read more of this story at Slashdot.



    - Amazon Wins Court Order To Block Perplexity's AI Shopping Bots
    Last November, Amazon sued Perplexity demanding that the AI search startup stop allowing its AI browser agent, Comet, to make purchases for users online. Today, a judge ruled in favor of the tech giant, granting it a temporary court injunction blocking the scraping of Amazon's website. According to court filings, the judge found strong evidence the tool accessed the retailer's systems "without authorization." CNBC reports: In a ruling dated Monday, U.S. District Judge Maxine Chesney wrote that Amazon has provided "strong evidence" that Perplexity's Comet browser accessed its website at the user's direction, but "without authorization" from the e-commerce giant. Chesney said Amazon submitted "essentially undisputed evidence" that it spent more than $5,000 to respond to the issue, including "numerous hours" where its employees worked to develop tools to block Comet from accessing its private customer tools and to prevent the tool from "future unauthorized access." "Given such evidence, the Court finds Amazon has shown a likelihood of success on the merits of its claim," Chesney wrote. Chesney's ruling includes a weeklong stay to allow Perplexity to appeal the order. Amazon wrote in its original complaint that Perplexity's agents posed security risks to customer data because they "can act within protected computer systems, including private customer accounts requiring a password." The company also said Perplexity's agents created challenges for the company's advertising business, because when AI systems generate ad traffic, the impressions have to be detected and filtered out before advertisers can be billed. "This requires modifications to Amazon's advertising systems, including developing new detection mechanisms to identify and exclude automated traffic," Amazon wrote in its complaint. "These system adaptations are necessary to maintain contractual obligations with advertisers who pay only for legitimate human impressions."

    Read more of this story at Slashdot.




