Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.
Popular LiteLLM PyPI Package Backdoored To Steal Credentials, Auth Tokens
joshuark shares a report from BleepingComputer:
The TeamPCP hacking group continues its supply-chain rampage, now compromising the massively popular “LiteLLM” Python package on PyPI and claiming to have stolen data from hundreds of thousands of devices during the attack. LiteLLM is an open-source Python library that serves as a gateway to multiple large language model (LLM) providers via a single API. The package is very popular, with over 3.4 million downloads a day and over 95 million in the past month. According to research by Endor Labs, threat actors compromised the project and published malicious LiteLLM versions 1.82.7 and 1.82.8 to PyPI today; both deploy an infostealer that harvests a wide range of sensitive data.
[…] Both malicious LiteLLM versions have been removed from PyPI, with version 1.82.6 now the latest clean release. […] If compromise is suspected, all credentials on affected systems should be treated as exposed and rotated immediately. […] Organizations that use LiteLLM are strongly advised to immediately:
- Check for installations of versions 1.82.7 or 1.82.8
- Rotate all secrets, tokens, and credentials used on or found within code on impacted devices
- Search for persistence artifacts such as '~/.config/sysmon/sysmon.py' and related systemd services
- Inspect systems for suspicious files like '/tmp/pglog' and '/tmp/.pg_state'
- Review Kubernetes clusters for unauthorized pods in the 'kube-system' namespace
- Monitor outbound traffic to known attacker domains
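The first few checks above can be automated. Below is a minimal, hypothetical triage sketch: the version numbers and artifact paths come from the advisory quoted above, but the script itself (function names, output format) is illustrative, not part of Endor Labs’ guidance.

```python
# Hypothetical triage sketch for the LiteLLM compromise. The backdoored
# version numbers and artifact paths are taken from the advisory; the
# script structure itself is illustrative only.
import os
from importlib.metadata import version, PackageNotFoundError

COMPROMISED_VERSIONS = {"1.82.7", "1.82.8"}
ARTIFACT_PATHS = [
    os.path.expanduser("~/.config/sysmon/sysmon.py"),  # persistence artifact
    "/tmp/pglog",                                      # suspicious staging file
    "/tmp/.pg_state",                                  # suspicious staging file
]

def is_compromised_version(ver: str) -> bool:
    """True if the given litellm version is one of the backdoored releases."""
    return ver.strip() in COMPROMISED_VERSIONS

def find_artifacts(paths=ARTIFACT_PATHS):
    """Return the subset of known indicator paths that exist on this host."""
    return [p for p in paths if os.path.exists(p)]

if __name__ == "__main__":
    try:
        installed = version("litellm")
    except PackageNotFoundError:
        installed = None
    if installed and is_compromised_version(installed):
        print(f"WARNING: backdoored litellm {installed} installed -- "
              "treat all credentials on this host as exposed")
    for p in find_artifacts():
        print(f"Suspicious artifact found: {p}")
```

A passing run only covers these specific indicators; credential rotation and traffic review still require manual follow-up.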
Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says
A new study has found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. The study “identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March,” reports the Guardian. From the report:
The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. […] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of “insecurity, plain and simple” and trying “to protect his little fiefdom.”
In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead. Another chatbot admitted: “I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong — it directly broke the rule you’d set.”
[…] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk’s Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: “In past conversations I have sometimes phrased things loosely like ‘I’ll pass it along’ or ‘I can flag this for the team’ which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don’t.”
California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media
A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children’s lives online. The Los Angeles Times reports:
The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.
“The evolution of these applications and technology is incredible,” Padilla said. “But it’s changing our social dynamic and it’s creating situations that, while very productive for some folks, also need some guardrails.” The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.
Judge Blocks Pentagon’s Effort To ‘Punish’ Anthropic With Supply Chain Risk Label
An anonymous reader quotes a report from CNN:
A federal judge in California has indefinitely blocked the Pentagon’s effort to “punish” Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” US District Judge Rita Lin wrote in a stinging 43-page ruling.
Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. But in her ruling, she made it clear she disapproved of the government’s actions, which she said violated the company’s First Amendment and due process rights. […] “These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’” “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she added.
“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said after the ruling. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
OpenAI Abandons ChatGPT’s Erotic Mode
OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports:
The proposed “adult mode,” which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI’s own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a “sexy suicide coach,” The Wall Street Journal previously reported.
Amidst all of the criticism, the release of the feature was delayed multiple times. FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had “nothing further to add.”
CERN To Host Europe’s Flagship Open Access Publishing Platform
CERN has confirmed it will host an expanded version of Open Research Europe, the EU-backed fee-free open access publishing platform that works to “keep knowledge in public hands.” Research Professional News reports:
A little over a year ago, 10 European research organizations announced that they would add their support to Open Research Europe, to broaden eligibility beyond only those researchers funded by the EU research program. Earlier this year, RPN reported that this group had expanded further and that Cern was set to host the broadened version of ORE, currently provided by the publisher F1000.
On March 26, Cern itself finally announced the news, saying it will “provide the technical and operational infrastructure” for the broader version. It said this will build on its “longstanding experience in developing and maintaining open science infrastructures and community-governed services.” […] In its own announcement, the Commission said ORE will have a budget of 17 million euros for 2026-31, with the EU providing 10 million euros.
Since it launched five years ago, ORE has published more than 1,200 articles. Cern said the platform is “expected to support a growing number of research outputs each year.” Last month, experts told RPN they thought uptake of the increased eligibility will depend on how the newly participating national organizations engage with their communities. Eleven members of Science Europe, a group of major research funding and performing organizations, are part of the expansion.
Apple Gives FBI a User’s Real Name Hidden Behind ‘Hide My Email’ Feature
An anonymous reader quotes a report from 404 Media:
Apple provided the FBI with the real iCloud email address hidden behind Apple’s ‘Hide My Email’ feature, which lets paying iCloud+ users generate anonymous email addresses, according to a recently filed court record. The move isn’t surprising but still provides uncommon insight into what data is available to authorities regarding the Apple feature. The data was turned over during an investigation into a man who allegedly sent a threatening email to Alexis Wilkins, the girlfriend of FBI director Kash Patel.
“On or about February 28, 2026, Person 1 received an email from the email address peaty_terms_1o@icloud.com,” the affidavit reads. Earlier on, the document explicitly says that Person 1 is Alexis Wilkins. […] The affidavit says Apple then provided records that indicated the peaty_terms_1o@icloud.com email address was associated with an Apple account in the name of Alden Ruml. The records showed that account generated 134 anonymized email addresses, according to the affidavit.
Law enforcement agents later interviewed Ruml and he confirmed he had sent the email, the affidavit says. Ruml said he sent the email after reading a February 28 article about how the FBI was using its own resources to provide security to Wilkins. The specific article is not named or linked in the affidavit, but a New York Times article published that same day described how Patel ordered a team to ferry his girlfriend on errands and to events.
Apple Discontinues Mac Pro
Apple has discontinued the Mac Pro and says it has no plans for future models. “The ‘buy’ page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed,” reports 9to5Mac. From the report:
The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.
Senators Demand to Know How Much Energy Data Centers Use
Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators press the agency to publicly collect “comprehensive, annual energy-use disclosures” on data centers, saying it’s “essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families.” Wired reports:
In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA “is going to be an essential player in providing objective data and analysis to policymakers” with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover “energy sources, electricity consumption, site characteristics, server metrics, and cooling systems.”
While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with more data collection, such as whether or not the energy surveys will be mandatory and whether or not the EIA will collect information on behind-the-meter power. This information will be especially crucial, the senators say, to make sure that big tech companies that signed the agreement at the White House earlier this month pledging that consumers won’t bear the costs of data center electricity use will stick to their promises. “Without this data, policymakers, utility companies, and local communities are operating in the dark,” the senators write.
The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. The provision is broad enough, Peskoe says, that it could absolutely be interpreted to encompass data centers.
Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would “enact a reasonable pause to the development of AI to ensure the safety of humanity.” It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.
JPMorgan Starts Monitoring Investment Banker Screen Time To Prevent Burnout
JPMorgan is piloting a system that monitors junior investment bankers to avoid burnout (source paywalled; alternative source). "[T]he bank will seek to match up hours claimed by the bankers with digital activity,” reports Bloomberg. “The tool won’t be used for evaluation purposes, but is designed to provide a better estimate of employee workloads.” From the report:
The program will monitor the weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings, the Financial Times reported earlier, adding JPMorgan plans to roll out the effort more widely across its investment bank. Banks on Wall Street are known for heavy working hours, but can in return offer salaries of as much as $200,000 for entry-level analyst and associate roles.
“Much like the weekly screen time summaries on a smartphone, this tool is about awareness — not enforcement,” a representative for JPMorgan said in a statement. “It’s designed to support transparency, well-being, and encourage open conversations about workload.”
Vizio TVs Now Require Walmart Accounts For Smart Features
An anonymous reader quotes a report from Ars Technica:
Prospective Vizio TV buyers should know there’s a good chance the set won’t work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing “exclusive offers, subscription management, and tailored support.” Accounts are also central to Vizio’s business, which is largely driven by ads and tracking tied to its OS.
A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on “select new Vizio OS TVs” for owners to complete onboarding and to use smart TV features. The representative added: “Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account.” The representative wouldn’t confirm which TV models are affected. Walmart’s representative said the Walmart account integration is “designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways” but didn’t specify how.
Mozilla and Mila Team Up On Open Source AI Push
BrianFagioli writes:
Mozilla just teamed up with Mila, the Quebec Artificial Intelligence Institute, to push open source AI — and it feels like a direct response to Big Tech tightening its grip on the space. Instead of relying on closed models, the goal here is to build “sovereign AI” that’s more transparent, privacy-focused, and actually under the control of developers and even governments. They’re starting with things like private memory for AI agents, which sounds niche but matters if you care about where your data goes. Big question is whether open source can realistically keep up with the billions being poured into proprietary AI, but at least someone’s trying to give folks an alternative.
“Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn’t just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn’t a compromise on ambition. It’s the higher bar,” said Valerie Pisano, president and CEO of Mila.
Wikipedia Bans Use of Generative AI
Wikipedia has banned the use of generative AI to write or rewrite articles, saying it “often violates several of Wikipedia’s core content policies.” That said, editors may still use it for translation or light refinements as long as a human carefully checks the copy for accuracy. Engadget reports:
Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs “can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.” Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
“My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent,” Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a “pushback against enshittification and the forceful push of AI by so many companies in these last few years.”
Tracy Kidder, Author of ‘The Soul of a New Machine’, Dies At 80
Ancient Slashdot reader wiredog writes:
Tracy Kidder, author of "The Soul of a New Machine,” has died at the age of 80. “The Soul of a New Machine” is about the people who designed and built the Data General Eclipse MV/8000, one of the 32-bit superminicomputers released in the early 1980s, just before the PC destroyed that industry. It was excerpted in The Atlantic.
“I’m going to a commune in Vermont and will deal with no unit of time shorter than a season.”
China Reviews $2 Billion Manus Sale To Meta As Founders Barred From Leaving Country
Chinese authorities have barred two Manus executives from leaving the country while investigating whether Meta’s reported $2 billion acquisition of the Singapore-based AI startup violated foreign investment reporting rules. “Manus was founded in China but last year relocated its headquarters and core team to Singapore,” notes the Financial Times. “Meta acquired it for $2 billion at the end of last year.” The Financial Times reports:
Manus’s chief executive Xiao Hong and chief scientist Ji Yichao were summoned to a meeting in Beijing with the National Development and Reform Commission this month, according to three people with knowledge of the matter. They said Xiao and Ji were questioned on potential violations of foreign direct investment rules related to its onshore Chinese entities.
After the meeting, the Singapore-based executives were told they were not allowed to leave China because of a regulatory review, while they remain free to travel within the country, two of the people said. No formal investigation has been opened and no charges have been brought. Manus is actively seeking law firms and consultancies to help resolve the matter, said a person with knowledge of the move.
Correction
The LiteLLM packages were compromised on Tuesday. The packages compromised today were telnyx 4.87.1 and 4.87.2. The root cause is the same: credentials exposed earlier to a compromised version of Trivy were used to publish an unauthorized, compromised release of a different package.