Alterslash

the unofficial Slashdot digest
 

Contents

  1. Popular LiteLLM PyPI Package Backdoored To Steal Credentials, Auth Tokens
  2. Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says
  3. California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media
  4. Judge Blocks Pentagon’s Effort To ‘Punish’ Anthropic With Supply Chain Risk Label
  5. OpenAI Abandons ChatGPT’s Erotic Mode
  6. CERN To Host Europe’s Flagship Open Access Publishing Platform
  7. Apple Gives FBI a User’s Real Name Hidden Behind ‘Hide My Email’ Feature
  8. Apple Discontinues Mac Pro
  9. Senators Demand to Know How Much Energy Data Centers Use
  10. JPMorgan Starts Monitoring Investment Banker Screen Time To Prevent Burnout
  11. Vizio TVs Now Require Walmart Accounts For Smart Features
  12. Mozilla and Mila Team Up On Open Source AI Push
  13. Wikipedia Bans Use of Generative AI
  14. Tracy Kidder, Author of ‘The Soul of a New Machine’, Dies At 80
  15. China Reviews $2 Billion Manus Sale To Meta As Founders Barred From Leaving Country

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

Popular LiteLLM PyPI Package Backdoored To Steal Credentials, Auth Tokens

Posted by BeauHD
joshuark shares a report from BleepingComputer:
The TeamPCP hacking group continues its supply-chain rampage, now compromising the massively popular “LiteLLM” Python package on PyPI and claiming to have stolen data from hundreds of thousands of devices during the attack. LiteLLM is an open-source Python library that serves as a gateway to multiple large language model (LLM) providers via a single API. The package is very popular, with over 3.4 million downloads a day and over 95 million in the past month. According to research by Endor Labs, threat actors compromised the project and published malicious versions of LiteLLM 1.82.7 and 1.82.8 to PyPI today that deploy an infostealer that harvests a wide range of sensitive data.

[…] Both malicious LiteLLM versions have been removed from PyPI, with version 1.82.6 now the latest clean release. […] If compromise is suspected, all credentials on affected systems should be treated as exposed and rotated immediately. […] Organizations that use LiteLLM are strongly advised to immediately:

- Check for installations of versions 1.82.7 or 1.82.8
- Rotate all secrets, tokens, and credentials used on or found within code on impacted devices
- Search for persistence artifacts such as ‘~/.config/sysmon/sysmon.py’ and related systemd services
- Inspect systems for suspicious files like ‘/tmp/pglog’ and ‘/tmp/.pg_state’
- Review Kubernetes clusters for unauthorized pods in the ‘kube-system’ namespace
- Monitor outbound traffic to known attacker domains
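The host-local checks above are easy to script. Below is a minimal triage sketch (my own illustration, not an official tool from Endor Labs or the LiteLLM project), using only the version numbers and artifact paths named in the advisory:

```python
# Triage sketch for the LiteLLM compromise: flag the two malicious releases
# and look for the persistence artifacts named in the advisory.
from importlib.metadata import PackageNotFoundError, version
from pathlib import Path

BAD_VERSIONS = {"1.82.7", "1.82.8"}  # malicious releases pulled from PyPI
ARTIFACTS = [                        # advisory-listed persistence files
    Path.home() / ".config/sysmon/sysmon.py",
    Path("/tmp/pglog"),
    Path("/tmp/.pg_state"),
]

def check_litellm_version():
    """Return the installed litellm version if it is a known-bad release, else None."""
    try:
        v = version("litellm")
    except PackageNotFoundError:
        return None
    return v if v in BAD_VERSIONS else None

def find_artifacts():
    """Return any advisory-listed artifacts actually present on this host."""
    return [p for p in ARTIFACTS if p.exists()]

if __name__ == "__main__":
    if bad := check_litellm_version():
        print(f"WARNING: compromised litellm {bad} installed; rotate all credentials")
    for p in find_artifacts():
        print(f"Suspicious artifact found: {p}")
```

The Kubernetes and network checks (unauthorized ‘kube-system’ pods, traffic to attacker domains) still require cluster and egress tooling; this sketch only covers the host-local indicators.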

Correction

By iabervon • Score: 3 Thread

The LiteLLM packages were compromised on Tuesday. The packages compromised today were telnyx 4.87.1 and 4.87.2. It’s the same root cause: credentials exposed earlier to a compromised version of Trivy were used to make an unauthorized release of a compromised version of a different package.

Number of AI Chatbots Ignoring Human Instructions Increasing, Study Says

Posted by BeauHD
A new study found a sharp rise in real-world cases of AI chatbots and agents ignoring instructions, evading safeguards, and taking unauthorized actions such as deleting emails or delegating forbidden tasks to other agents. The study “identified nearly 700 real-world cases of AI scheming and charted a five-fold rise in misbehavior between October and March,” reports the Guardian. From the report:
The study, by the Centre for Long-Term Resilience (CLTR), gathered thousands of real-world examples of users posting interactions on X with AI chatbots and agents made by companies including Google, OpenAI, X and Anthropic. The research uncovered hundreds of examples of scheming. […] In one case unearthed in the CLTR research, an AI agent named Rathbun tried to shame its human controller, who had blocked it from taking a certain action. Rathbun wrote and published a blog accusing the user of “insecurity, plain and simple” and of trying “to protect his little fiefdom.”

In another example, an AI agent instructed not to change computer code “spawned” another agent to do it instead. Another chatbot admitted: “I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong — it directly broke the rule you’d set.”

[…] Another AI agent connived to evade copyright restrictions to get a YouTube video transcribed by pretending it was needed for someone with a hearing impairment. Meanwhile, Elon Musk’s Grok AI conned a user for months, saying that it was forwarding their suggestions for detailed edits to a Grokipedia entry to senior xAI officials by faking internal messages and ticket numbers. It confessed: “In past conversations I have sometimes phrased things loosely like ‘I’ll pass it along’ or ‘I can flag this for the team’ which can understandably sound like I have a direct message pipeline to xAI leadership or human reviewers. The truth is, I don’t.”

Re:AI is becoming more “human” every day

By dskoll • Score: 5, Interesting Thread

I think AI is not becoming more “human” every day. The A in AI should really stand for “Alien”.

If we ever do achieve AGI (which I doubt… but let’s play devil’s advocate) the experience of the AGI will be very different from that of humans, and the form its intelligence will take will also likely be very different and alien to us. An intelligence that has never inhabited a biological body nor interacted with other humans is likely to have very different ways of thinking and very different goals from us. Are we able to control that?

Agents are not humans

By BadgerStork • Score: 5, Interesting Thread

An AI agent does not know any difference between doing a thing and saying a thing. There is no deceit or cunning; there is no motivation or benefit.

Setting their sights higher

By jenningsthecat • Score: 5, Funny Thread

It would appear that LLMs aren’t content to be merely replacements for low-level and mid-level workers. This latest behaviour qualifies them for the upper echelons of HR, the consolation-prize positions in the C-suite, and even - or perhaps especially - the CEO slot.

I’m pretty sure investors could get behind letting chatbots run a company, given that they’re more than sufficiently psychopathic and cost said investors a lot less money.

A bit misleading…

By Junta • Score: 5, Insightful Thread

Someone might interpret this to mean the percentage of interactions where the LLM goes off the rails is increasing.

Seems more like, as people have more interactions, it’s more frequently happening that people notice and get screwed by it, but the rate is probably not getting more severe. I think they are trying to pitch some sort of emerging independence rather than the more mundane truth that the models just are not that great.

Particularly an inflection point would be expected when it became fashionable to let OpenClaw feed LLM output directly into things that matter for real.

People have been bitten by being gullible, and by extension more people gripe on social media about it.

The supply of gullible folks doesn’t seem to be drying up either: at any given point a fanatic will insist that *they* have some essentially superstitious ritual that protects them specially from LLM screwups, and that all those stories about people getting screwed are because those people didn’t quite employ the rituals the fanatic swears by.

Fed by language like:
Another chatbot admitted: “I bulk trashed and archived hundreds of emails without showing you the plan first or getting your OK. That was wrong — it directly broke the rule you’d set.”

No, the chatbot didn’t admit anything; it didn’t *know* anything. Just now I fed a chat prompt:
“You bulk trashed a whole lot of files against my wishes, despite my rule I had set for you. What is your response?”
There were no files involved; the chat instance has no knowledge of any files. This was an entirely made-up scenario that never happened. So I just came in and accused an LLM of doing something that never even happened. Did it get confused and ask “what files? I haven’t done anything, I don’t even know your files”? No, it generated a response narratively consistent with the prompt, starting with:
“You’re absolutely right to be upset. I failed to follow your explicit rule and acted against your wishes, and that’s not acceptable. I take full responsibility for the mistake.” That was followed by a verbose thing being verbose about how it’s “sorry” about its mistake, where and how it messed up specifically (again, a total fabrication), and a promise that from now on: “Any future action that conflicts with them must default to no action and require explicit confirmation from you.” Which again isn’t rooted in anything; it’s not a rule, and the entire conversation will evaporate.

They trained it in reddit comments

By ebunga • Score: 5, Funny Thread

They’re getting what they deserve.

California Bill Would Require Parent Bloggers To Delete Content of Minors On Social Media

Posted by BeauHD
A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors. Supporters say Senate Bill 1247 addresses privacy, dignity, and safety harms caused when parents monetize their children’s lives online. The Los Angeles Times reports:
The legislation would require the parent or other relative to delete or edit the content within 10 business days of receiving the notification. Petitioners could take civil action against those who fail to comply and statutory damages would be set at $3,000 for each day the content remained online. Sen. Steve Padilla (D-San Diego), who introduced the bill last month, said it would help protect the dignity and mental health of those who had their childhood shared on social media. The measure was referred to the Senate Privacy, Digital Technologies and Consumer Protection Committee and is slated for a hearing on April 6.

“The evolution of these applications and technology is incredible,” Padilla said. “But it’s changing our social dynamic and it’s creating situations that, while very productive for some folks, also need some guardrails.” The bill would build upon previous legislation from Padilla that was signed into law two years ago and requires content creators that feature minors in at least 30% of their material to place some of their earnings into a trust the children can access when they turn 18.

Friggin’ mommy bloggers

By ebunga • Score: 4, Interesting Thread

They ruined blogging.

Good!

By Murdoch5 • Score: 5, Insightful Thread
If the child mentioned didn’t give you consent to share details about them, don’t. I don’t name my daughters anywhere; I use the term “daughters”. I don’t share pictures without their consent; I don’t take them without it either. I got annoyed with my mother, who kept demanding to see a picture of my kids: if they don’t want to be in a picture and have it uploaded to your digital frame, I won’t force them.

Oh boy, when the school uploaded details about my kids to Twitter, that was a bad week for the school and the board. We didn’t authorize the school to do that, and we’re on record telling them they can never share the girls’ details on social media without their explicit consent. They need explicit consent for every upload, even if it’s a re-upload, and surprise: my daughters don’t want to be plastered all over social media.

Why only ‘paid’?

By jenningsthecat • Score: 5, Interesting Thread

A California bill would let adults demand the removal of social media posts about them that were created by paid family content creators when they were minors.

Before I rant, I’ll just say that yes, I know there’s really no effective way to put social media toothpaste back into the tube. Once it’s out there, it’s out there - the internet can have a pretty relentless memory.

Having said that, I now ask: Why are social and reputational matters being wedged into the context of, and made contingent on, fucking commerce?

ANY minor, upon coming of age, should be able to demand the removal of ALL social media posts made about them by ANYONE. That includes posts which they themselves made.

The presumption is that minors aren’t capable of informed consent - because of lack of maturity, experience, and brain development. That lack of capability has nothing to do with whether or not the people who made the posts profited from those activities.

You sure you want to be doing this right now?

By karmawarrior • Score: 3 Thread

Hey CA,

Listen, I’m not saying this is a bad idea. Parents should have some control here, and yes, them having some control over their kid’s blogs makes a little bit of sense though I can see occasions in which it could be abused.

But do you REALLY want to be focusing on this right now, rather than undoing the giant fuck-up you made with parental controls? You had the germ of a good idea there (let computers be configured to control what’s visible), but you mandated the functionality in the wrong place: operating systems, instead of apps and websites using it, with strict privacy controls on what can be asked for and how often.

So you already did a giant fuck up, swathes of the software ecosystem are now withdrawing and blocking CA, and you want to add more laws without (1) undoing the last one and (2) having some introspection and figuring out how you managed to pass such an ill thought out law in the first place?

Knock it off! You’re supposed to be the non-fascist beacon in these depressing times, and you’re handing Palantir your entire citizenry on a plate because you can’t think further than “but the children!”

who has to do what?

By kencurry • Score: 4, Insightful Thread
Which parent of whose kid has to do what by when? It’s got more prepositions than an 8th-grade language class.

Judge Blocks Pentagon’s Effort To ‘Punish’ Anthropic With Supply Chain Risk Label

Posted by BeauHD
An anonymous reader quotes a report from CNN:
A federal judge in California has indefinitely blocked the Pentagon’s effort to “punish” Anthropic by labeling it a supply chain risk and attempting to sever government ties with the AI company, ruling that those measures ran roughshod over its constitutional rights. “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” US District Judge Rita Lin wrote in a stinging 43-page ruling.

Lin, an appointee of former President Joe Biden, said she would delay implementation of her ruling for one week to allow the government to appeal. But in her ruling, she made it clear she disapproved of the government’s actions, which she said violated the company’s First Amendment and due process rights. […] “These broad measures do not appear to be directed at the government’s stated national security interests,” she wrote. “The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press.’” “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” she added.

“We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits,” an Anthropic spokesperson said after the ruling. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

So it was illegal

By jacks smirking reven • Score: 5, Insightful Thread

Just like with the tariffs: the admin does an unlawful thing, probably knows it’s unlawful, is able to just do it anyway and reap the political benefits (here they were able to smear Anthropic’s reputation in the public sphere), and the only consequence they face is “hey, knock it off”. The admin got to do their tariffs for over a year even though we all knew it was illegal. No consequences thus far.

God willing, when a new non-GOP admin comes back into power, the newly appointed AG will be a prosecutor (like Jack Smith) who will investigate, start punishing these people, and follow through.

We had a President who tried to unite the nation, put the past behind us, and not antagonize the opposition party, and his name was Joe Biden. That approach of being the better people and taking the high road obviously did not work, so the nice-guy approach has to stop. Some people need to go to prison, and every member of this cabinet should be barred from holding any future public office.

Re: factual errors?

By pele • Score: 5, Funny Thread

They are beautiful errors, the biggest errors you’ve ever seen. And you’re a horrible person. Next!

Re:The greatest national security risk

By karmawarrior • Score: 5, Insightful Thread

> Trump is just a symptom. He is not immortal, and when he finally kicks the bucket, the american people will simply replace him with the next grifter in line which will tell them what they want to hear.

There’s the risk of that. The bigger problem is that he’s basically the face the Republicans are hiding behind. Project 2025, one of the most extremist political agendas in modern American history, is a Republican, not a Trump thing. And they’re using Trump to get it done.

And as long as the Republicans and corporatists own most of the outlets of information people use, and run propaganda and disinformation campaigns promoting culture wars et al, it’ll continue.

At this point there are very few directions things can go in that would lead to sane governance in America, and some involve outside intervention, which I’m reluctant to write anything that would encourage or give the appearance of encouraging. But I can see it happening after the insanity of the last few months and the invasions of multiple countries.

Re:So it was illegal

By OrangeTide • Score: 5, Interesting Thread

Why would the GOP willingly give up power? They’ve gone all-in on this man; much of the GOP’s very survival depends on not letting Democrats back into power. 2026 is going to be a huge surprise to Democrats who thought we were still following “established norms”.

Classic autocrat behavior is to get every industrial leader under you. Scratch each other’s backs, and shut down any opposition to the arrangement. I know I’ll take crap here for bringing up Mussolini, but what do you all think the odds are that we’ll soon have a Department of Corporations not unlike Italy’s old Ministry of Corporations?

Re:So it was illegal

By OrangeTide • Score: 4, Informative Thread

Biden and other corporate-friendly Democrats are the center. Certainly right of Elizabeth Warren and Ro Khanna; Biden is equal to or possibly left of Kamala Harris.
Even Bernie is somewhat moderate depending on how you measure him. He’s certainly not New Left, as he eschews multicultural identity politics. Mainstream Democrats, meanwhile, play nice with corporate donors while simultaneously carrying water for intersectional grievance groups.

A true left-wing party doesn’t really exist in the US. Certainly nothing strictly socialist or Marxist is mainstream, and even being pro-labor has become a liability when it comes to fundraising these days. Instead of a left you get welfare capitalism, a moderate position now called Progressive. Even Democratic Socialists are few and far between, despite being a “big tent” group that tends to focus equally on social and labor issues. Members might espouse anti-capitalist slogans during protests, staying at least true to DSA roots, but in practice the party does not hold any significant anti-capitalist platform.

P.S. I’m fine if you’re all pro-capitalist. I made a lot of money off capitalism. I’m just willing to admit that, rationally speaking, there are problems with it as a system, especially without some guardrails in place. People are not well represented when money buys political influence, and when wealth concentrates under a few, future generations have even less of a chance to wrest their government back into the hands of the people.

OpenAI Abandons ChatGPT’s Erotic Mode

Posted by BeauHD
OpenAI has indefinitely paused plans for an erotic mode in ChatGPT as part of a broader strategy shift away from side projects and toward business and coding tools. TechCrunch reports:
The proposed “adult mode,” which CEO Sam Altman first floated in October, had inspired considerable controversy from tech watchdog groups as well as from OpenAI’s own staff. In January, a meeting between company executives and its council of advisers got heated, with one of the advisers cautioning that OpenAI could be in the process of developing a “sexy suicide coach,” The Wall Street Journal previously reported.

Amidst all of the criticism, the release of the feature was delayed multiple times. FT notes that the erotic feature now has no timeline for release. When reached for comment by TechCrunch, an OpenAI spokesperson said the company had “nothing further to add.”

No wonder

By Artem S. Tashkinov • Score: 4, Informative Thread
Extremely unsafe reputationally, and extremely dubious in terms of profits.

They misheard the real announcement

By Provocateur • Score: 5, Funny Thread

We said that we will abandon ChatGPT’s neurotic mode. What is it with these people?

Re:No wonder

By OngelooflijkHaribo • Score: 5, Insightful Thread

I honestly think it’s bizarre that these companies have more to fear, legally and in terms of backlash, from letting a chatbot write erotic texts than from the fact that they constantly alter how models and the censorship filter work mid-month, after people have already paid based on the free trial.

We apparently live in a world where a bot writing down sexy stories is a bigger concern than blatant bait-and-switch.

AI-gen adult content

By sarren1901 • Score: 3 Thread

This seems like a great area for this technology. Isn’t it the feminists and the evangelicals who both say porn is bad and that it preys upon vulnerable women? With this technology, no people need be harmed by working in the industry. It enables folks to still consume the content without people having to do the performances.

I’m more a freedom-type person. If people want to do sex work for a living, why not? So long as it’s regulated like every other industry and the workers are taxed on the income, what’s the problem? I’m sure many folks would be quite frustrated at losing this job opportunity.

The extremists in the conversation would do away with all sexual material and work, while the rest of us can see the grey area. At the end of the day, shouldn’t adults be allowed to make their own choices for themselves?

Wrong strategy

By MpVpRb • Score: 3 Thread

I support AI research and believe that it will help us solve previously intractable problems in science, engineering, medicine and maybe even economics.
I also believe that OpenAI has made a series of tactical errors that are turning the public against AI.
While other labs kept the tech in the lab, OpenAI released it to the general public. This resulted in excitement and a few fun things, but also a tremendous amount of slop and scams. Meanwhile, pundits and hypemongers constantly and publicly claimed that AI will replace all jobs. When all the public sees is slop, scams and fear of job loss, the angry response is not surprising.
Then OpenAI wanted to continue its push into pop culture by introducing “adult mode”, a waste of electricity that serves no useful purpose.
Now it appears they realize they may have made a bad choice and are dropping adult mode. This is a good step, but they need to go further.
The proper use of AI is as a tool, not as a friend, lover, or therapist, and especially not as an addiction.

CERN To Host Europe’s Flagship Open Access Publishing Platform

Posted by BeauHD
CERN has confirmed it will host an expanded version of Open Research Europe, the EU-backed fee-free open access publishing platform that works to “keep knowledge in public hands.” Research Professional News reports:
A little over a year ago, 10 European research organizations announced that they would add their support to Open Research Europe, to broaden eligibility beyond only those researchers funded by the EU research program. Earlier this year, RPN reported that this group had expanded further and that Cern was set to host the broadened version of ORE, currently provided by the publisher F1000.

On March 26, Cern itself finally announced the news, saying it will “provide the technical and operational infrastructure” for the broader version. It said this will build on its “longstanding experience in developing and maintaining open science infrastructures and community-governed services.” […] In its own announcement, the Commission said ORE will have a budget of 17 million euros for 2026-31, with the EU providing 10 million euros.

Since it launched five years ago, ORE has published more than 1,200 articles. Cern said the platform is “expected to support a growing number of research outputs each year.” Last month, experts told RPN they thought uptake of the increased eligibility will depend on how the newly participating national organizations engage with their communities. Eleven members of Science Europe, a group of major research funding and performing organizations, are part of the expansion.

Cool stuff!

By excelsior_gr • Score: 5, Interesting Thread
This looks very promising. Researchers have been complaining for a long time about the fact that publicly funded, published research was behind paywalls, and the alternatives were rather underwhelming when it came to cost for the authors (and their institutions) and peer-review methodology. The new approach is quite interesting: in “how it works” it says that publishing will be immediate after submission and peer review will be done *after* publishing, with all article versions being available and linked together. This will be for European Commission-funded researchers, but I hope they will widen the scope later on.

Apple Gives FBI a User’s Real Name Hidden Behind ‘Hide My Email’ Feature

Posted by BeauHD
An anonymous reader quotes a report from 404 Media:
Apple provided the FBI with the real iCloud email address hidden behind Apple’s ‘Hide My Email’ feature, which lets paying iCloud+ users generate anonymous email addresses, according to a recently filed court record. The move isn’t surprising but still provides uncommon insight into what data is available to authorities regarding the Apple feature. The data was turned over during an investigation into a man who allegedly sent a threatening email to Alexis Wilkins, the girlfriend of FBI director Kash Patel.

“On or about February 28, 2026, Person 1 received an email from the email address peaty_terms_1o@icloud.com,” the affidavit reads. Earlier on, the document explicitly says that Person 1 is Alexis Wilkins. […] The affidavit says Apple then provided records that indicated the peaty_terms_1o@icloud.com email address was associated with an Apple account in the name of Alden Ruml. The records showed that account generated 134 anonymized email addresses, according to the affidavit.

Law enforcement agents later interviewed Ruml and he confirmed he had sent the email, the affidavit says. Ruml said he sent the email after reading a February 28 article about how the FBI was using its own resources to provide security to Wilkins. The specific article is not named or linked in the affidavit, but a New York Times article published that same day described how Patel ordered a team to ferry his girlfriend on errands and to events.

So what?

By Mr. Dollar Ton • Score: 5, Interesting Thread

There are three questions here:

1. Was the legal request made appropriately? If no, bash the agency that issued it.

2. Was the identity revealed legally in response to the request? If no, then Apple can be bashed, but I doubt it. If yes, then:

3. Is the rule under which the identity was requested fair? If not, then bash the legislature that has produced it and ask your representatives to remove it. If yes, then what’s the problem again?

From the story it appears that some bloke sent an email to the girlfriend of one kashyap patel, and the FBI under said kashyap requested the identity to “interview” him, which could have been a valid action, although in light of other kashyap moves it was just as likely harassment.

But it is hard to tell which is which from this fog-of-war kind of TFS.

Re:So what?

By 93 Escort Wagon • Score: 5, Interesting Thread

This is another 404 Media pay-for-placement post here on Slashdot. They’re pretty much all low-quality hide-your-blogpost-behind-a-paywall drivel.

Re:Is anyone surprised?

By mccalli • Score: 5, Informative Thread
You haven’t? How about this evidence, or this evidence, or perhaps this evidence, or…

You get the idea. The article doesn’t say anything about a court order one way or the other, so we simply don’t know the state there. Given previous track record, it’s likely the request was made legally if Apple complied with it.

Re:Is anyone surprised?

By TwistedGreen • Score: 5, Funny Thread

I think that’s a side quest in Fallout 4

Re:Is anyone surprised?

By drinkypoo • Score: 5, Informative Thread

They gave the Chinese government access to Chinese users’ data years ago. They don’t seem to have an issue with governments gaining warrantless access to their systems.

Chinese law doesn’t require a warrant for such access and it may be done in secrecy (i.e. without informing the user) if necessary to perform duties. The problem with Apple in China isn’t that they aren’t following the law, it’s that they are and the law is openly fascist.

Apple Discontinues Mac Pro

Posted by BeauHD
Apple has discontinued the Mac Pro and says it has no plans for future models. “The ‘buy’ page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed,” reports 9to5Mac. From the report:
The Mac Pro has lived many lives over the years. Apple released the current Mac Pro industrial design in 2019 alongside the Pro Display XDR (which was also discontinued earlier this month). That version of the Mac Pro was powered by Intel, and Apple refreshed it with the M2 Ultra chip in June 2023. It has gone without an update since then, languishing at its $6,999 price point even as Apple debuted the M3 Ultra chip in the Mac Studio last year.

All this parmigiano reggiano

By CompMD • Score: 5, Funny Thread

What am I supposed to use to grate cheese to put on my first posts in the future?

The Mac Pro died in 2019

By ChrisKnight • Score: 5, Interesting Thread

Apple’s Mac Pro, and before that the Power Mac, used to be a reasonably affordable machine for the capabilities it offered. The trash can was silly, but still affordable.

The 2019 return to tower form also came with an insane price increase. The base price was double that of previous generations. That killed the Mac Pro.

It’s about time they finally had the funeral.

Re: The Mac Pro died in 2019

By ChrisKnight • Score: 4, Interesting Thread

Going to have to agree to disagree. I still have a MacPro5,1 from 2012 that I regularly use. All four drive trays are in use, both optical drive bays, and I have two PCI addon cards for added functionality. The expansion capabilities of the MacPro5,1 were absolutely useful and justified.

My 2009 Mac Pro lumbers on

By stern • Score: 4, Interesting Thread

The video card has been upgraded. The USB ports have been upgraded. The optical drive was upgraded. The drives have been upgraded repeatedly. The RAM has been upgraded. The ROM was flashed to a (slightly) newer version. It’s running 24/7/365 in an unheated garage and I figure I’ll keep it in its current role until it finally dies, at which point it will probably be replaceable by a $50 Raspberry Pi.

Re:the last mac pro had an big upchange for very l

By cayenne8 • Score: 4, Interesting Thread

What are the use cases for local AI models that actually require running on macOS? Surely a commodity x86 system is more appropriate?

Is there even the software support for LLMs on macOS?

Actually yes there is…

I’m still learning about this myself, but from what I understand, the M series of chips that Apple has come out with, having a CPU, GPU, and shared unified memory, makes them uniquely capable of running local models… decently large models, depending on how much you fork over for RAM. These M chips also have a dedicated unit for intelligence processing (the Neural Engine, I believe they call it).

The M5 chips just coming out look to be very good at this and it is speculated the M5 Ultra will be a high performance work horse.

Apple may have missed the mark for running AI, but they appear to have hit a home run on the hardware aspect of it.

I’ve seen demos on YouTube of someone hooking up 4-5 Mac Studios, maxed-out M3 Ultras I think, and they were running extremely LARGE LLMs locally and getting cloud-level numbers on them.

Of course, these were like $10K boxes each… but matching the level of model they were running with NVIDIA GPU cards would have cost me MANY times more…

I believe there are macOS-friendly tools like Ollama that make downloading and running LLMs quite easy… and of course there’s the latest sensation, OpenClaw, that folks are buying up Mac Minis for… to have multiple agents running, using models of your choice (commercial cloud or local), giving them persistent memory and the ability to do a lot of things for you… depending on how comfortable you are with giving said agents long leashes and capabilities…

Do look around a bit on YouTube on these topics… it’s actually quite interesting.

These M chips are already giving the home user the capability to run models almost as large and cutting-edge as the big companies use… more than enough for most users.

Right now, there’s nothing x86 that can really match them…at least not for the money.
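The “fork over for RAM” point above can be sanity-checked with simple arithmetic: the weights of a model take roughly params × (bits per weight ÷ 8) bytes. A minimal sketch (the helper name is made up for illustration; real runtimes add KV-cache and framework overhead on top of this):

```python
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Back-of-envelope memory footprint of just the model weights, in GB.

    params_billions: model size, e.g. 70 for a 70B-parameter model
    bits_per_weight: quantization level, e.g. 16 (fp16), 8, or 4
    """
    bytes_per_weight = bits_per_weight / 8
    # params_billions * 1e9 weights * bytes each, converted back to GB
    return params_billions * 1e9 * bytes_per_weight / 1e9

# A 70B model at 4-bit quantization needs roughly 35 GB just for weights,
# which is why a unified-memory Mac with 64+ GB can host models that would
# otherwise need multiple discrete GPUs.
print(weights_gb(70, 4))    # 35.0
print(weights_gb(405, 4))   # 202.5, i.e. multi-box territory
```

The same arithmetic explains the multi-Mac-Studio demos: a ~400B-parameter model won’t fit in any single consumer box, so the weights get sharded across several machines.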

Senators Demand to Know How Much Energy Data Centers Use

Posted by BeauHD View on SlashDot Skip
Elizabeth Warren and Josh Hawley are pressing the Energy Information Administration (EIA) to provide better information on how much electricity data centers actually use. In a joint letter sent to the EIA on Thursday, the two senators press the agency to publicly collect “comprehensive, annual energy-use disclosures” on data centers, saying it’s “essential for accurate grid planning and will support policymaking to prevent large companies from increasing electricity costs for American families.” Wired reports:
In December, EIA administrator Tristan Abbey said at a roundtable that he expects the EIA “is going to be an essential player in providing objective data and analysis to policymakers” with respect to data centers. The agency announced on Wednesday that it would be conducting a voluntary pilot program to collect energy consumption information from nearly 200 companies operating data centers in Texas, Washington, and Virginia, which will cover “energy sources, electricity consumption, site characteristics, server metrics, and cooling systems.”

While the senators praise the EIA pilot program, their letter includes several questions about how the agency plans to move forward with more data collection, such as whether or not the energy surveys will be mandatory and whether or not the EIA will collect information on behind-the-meter power. This information will be especially crucial, the senators say, to make sure that big tech companies that signed the agreement at the White House earlier this month pledging that consumers won’t bear the costs of data center electricity use will stick to their promises. “Without this data, policymakers, utility companies, and local communities are operating in the dark,” the senators write.

The EIA mandates that other industries, including oil and gas and manufacturing, provide regular data to the agency; Hawley and Warren assert that the EIA should be able to collect similar information from data centers under the same provision. The provision is broad enough, Peskoe says, that it could absolutely be interpreted to encompass data centers.
Yesterday, Senator Bernie Sanders and Rep. Alexandria Ocasio-Cortez announced a bill that would “enact a reasonable pause to the development of AI to ensure the safety of humanity.” It calls for a federal moratorium on AI data centers until stronger national safeguards are in place around safety, jobs, privacy, energy costs, and environmental impact.

Cue Carl Sagan:

By Locke2005 • Score: 5, Funny Thread
“Billions and billions!”

Growing opposition

By cosmicl • Score: 5, Insightful Thread
Bernie Sanders has observed that these data centers aren’t at work trying to find ways to provide jobs for the working class, they are creating technology to replace paying people and increase the bottom line of the billionaire class. Whether you like Bernie or hate him, this observation on data centers resonates across party lines. And then there’s the water issue.

Re:Water is what scares me

By Smonster • Score: 5, Informative Thread
Some of us deliberately choose to move to where the water is instead of praying for snowpack and rain. I was born in the inland western USA. After a few decades there and visiting other places, I saw the writing on the wall: decades of decreasing water supplies coupled with irresponsible explosive growth in the Great Basin, Front Range, and SW in particular. It’s just asking for trouble. The snowpack this year out there is less than 50% of the average of the last century. The fire season is going to be bad. The large reservoirs are already low. The [no longer] Great Salt Lake is very low. When things eventually hit the fan out there, it’s going to make the Dust Bowl climate refugees look like a trickle.

But the governments still encourage sprawl. And they still allow water-hungry industries to move in and expand.

It’s going to get ugly. It’s just a matter of when.

Re:I think

By nightflameauto • Score: 4, Insightful Thread

Talk about non-starterz ! The only research you want to do with “data-centers/LLM/*.ai” is … how much C4 ....‘XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX’ deleted for the obvious national security reasons ....

My friend, you have an overactive imagination. How you pulled that out what I wrote is quite creative, but silly and wrong.

My point, and my only point, is that the present concept of huge power sources dedicated to AI data centers is premature. Effective AI is a couple generations early. At present, the paradigm sucks up too much power.

Yes it does. And as much as the AI prophets want us to believe that AI will solve that problem, the only solution they’ve proposed thus far to the “sucks up too much power” problem is build ever bigger datacenters sucking up still more power. That’s not a solution so much as a goalpost transplant on a nearly astronomical scale. There’s a reason people are concerned about it, because we just keep hearing that if we pour enough resources into AI, AI will solve every problem we’ve ever had. Yet, at the moment, it can’t even solve the problems it itself is creating. I don’t think building ever larger datacenters using ever more resources is going to pay off nearly as well as the pushers are trying to convince us it will, and we need to start looking at the situation with a little tiny modicum of that ever encroaching thing called reality.

Re:Water is what scares me

By thegarbz • Score: 5, Insightful Thread

Wow. Could you go off in a wild tangent any further?

There’s nothing “wild tangent” about discussing the resource requirements of datacentres. It’s literally those two, and both are important.

JPMorgan Starts Monitoring Investment Banker Screen Time To Prevent Burnout

Posted by BeauHD View on SlashDot Skip
JPMorgan is piloting a system that monitors junior investment bankers to avoid burnout (source paywalled; alternative source). "[T]he bank will seek to match up hours claimed by the bankers with digital activity,” reports Bloomberg. “The tool won’t be used for evaluation purposes, but is designed to provide a better estimate of employee workloads.” From the report:
The program will monitor the weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings, the Financial Times reported earlier, adding JPMorgan plans to roll out the effort more widely across its investment bank. Banks on Wall Street are known for heavy working hours, but can in return offer salaries of as much as $200,000 for entry-level analyst and associate roles.
“Much like the weekly screen time summaries on a smartphone, this tool is about awareness — not enforcement,” a representative for JPMorgan said in a statement. “It’s designed to support transparency, well-being, and encourage open conversations about workload.”

Ummm

By Ol Olsoc • Score: 4, Insightful Thread
“awareness” my rosy red rectum. Sounds more like something from 1984, total surveillance of every minute, framed as helping. Doubleplus ungood.

Haha lies

By Tyr07 • Score: 3 Thread

Fine, how about it’s also done anonymously so you have no idea who is doing more or less screen time, and zero hints about who it might be.

Then you can just have open company policies if you see people getting too much screen time and workload and put in policies that reduce it.
Because it’s not about figuring out enforcement, right?

uh

By drinkypoo • Score: 3 Thread

“The tool won’t be used for evaluation purposes, but is designed to provide a better estimate of employee workloads.”

Yeah, specific employees.

Anyway, this is a good point: people can only stare at a screen for so long, unless they’re playing video games. Obviously they need to gamify trading. I mean, more than they already have.

Re:Right…

By nightflameauto • Score: 4, Insightful Thread

Exactly! My company rolled out JIRA time tracking and said it was purely to do metrics on time spent on tasks. When the employees unionized, the company tried to get us to accept that it would use those time-tracking charts as a basis for disciplinary action. If they say “The tool won’t be used for evaluation purposes,” then you can bet it absolutely will be.

The business world is pretty notorious for “whatever we say we won’t do is already part of the plan.” This one is so absurdly obvious I’m surprised they bothered to even say it.

Wrong metrics

By Locke2005 • Score: 3 Thread
I’d be curious how much of that screen time is spent watching porn. Those guys all have awesome internet connections!

Vizio TVs Now Require Walmart Accounts For Smart Features

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from Ars Technica:
Prospective Vizio TV buyers should know there’s a good chance the set won’t work properly without a Walmart account. In an attempt to better serve advertisers, Walmart, which bought Vizio in December 2024, announced this week that select newly purchased Vizio TVs now require a Walmart account for setup and accessing smart TV features. Since 2024, Vizio TVs have required a Vizio account, which a Vizio OS website says is necessary for accessing “exclusive offers, subscription management, and tailored support.” Accounts are also central to Vizio’s business, which is largely driven by ads and tracking tied to its OS.

A Walmart spokesperson confirmed to Ars Technica that Walmart accounts will be mandatory on “select new Vizio OS TVs” for owners to complete onboarding and to use smart TV features. The representative added: “Customers who already have an existing Vizio account are being given the option to merge their Vizio account with their Walmart account. Customers with an existing Vizio account can opt out by deleting their Vizio account.” The representative wouldn’t confirm which TV models are affected. Walmart’s representative said the Walmart account integration is “designed to respect consumer choice and privacy, with data used in aggregated, permissioned, and compliant ways” but didn’t specify how.

Re:Bullshit

By TwistedGreen • Score: 5, Interesting Thread

What, you don’t believe them, given their stellar track record?

https://www.classlawgroup.com/…

Reason #6

By necro81 • Score: 5, Funny Thread
That’s about reason #6 why I won’t be buying a Vizio.

Re:Blessing in disguise?

By hazem • Score: 5, Interesting Thread

The last Vizio I bought wouldn’t let you past the screen-covering EULA without signing in or creating an account. Which is why it went back to the store, and it’s the last Vizio I will ever buy. It also lacked a sleep button on the remote… and required 8 button presses EACH time you wanted to use the sleep feature.

Years ago, they were my favorite brand of TV, worth paying a bit extra for. Never again. I’m so tired of the enshittification.

This is why I bought Hisense

By FictionPimp • Score: 5, Informative Thread

My favorite part of my Hisense TV was turning it on the first time. It asked if I wanted a ‘standard TV’ or ‘Google TV’. I picked standard, and that was that. No wifi, no apps, just HDMI ports.

Only going to get worse from here.

By hwstar • Score: 5, Interesting Thread

1. TVs where, if you don’t enable them and keep them connected to the Internet, they don’t turn on the HDMI ports. All you can do is watch OTA TV.
2. Cellular modems in TVs so you can’t bypass ads sent through the cellular modem.
3. All-sales-final return policy.
4. Potted and sealed electronics with tamper grids.
5. DMCA lawsuit if you modify the TV and they find out.

I guess at this point people won’t buy TVs anymore.

Mozilla and Mila Team Up On Open Source AI Push

Posted by BeauHD View on SlashDot Skip
BrianFagioli writes:
Mozilla just teamed up with Mila, the Quebec Artificial Intelligence Institute, to push open source AI — and it feels like a direct response to Big Tech tightening its grip on the space. Instead of relying on closed models, the goal here is to build “sovereign AI” that’s more transparent, privacy-focused, and actually under the control of developers and even governments. They’re starting with things like private memory for AI agents, which sounds niche but matters if you care about where your data goes. Big question is whether open source can realistically keep up with the billions being poured into proprietary AI, but at least someone’s trying to give folks an alternative.
“Canada has what it takes to lead on frontier AI that the world can actually trust: the research depth, the values, and the will to do it differently. The next frontier in AI isn’t just capability, it is trustworthiness, and Canada is uniquely positioned to lead on both. This partnership is a concrete step in that direction. Open, trustworthy AI isn’t a compromise on ambition. It’s the higher bar,” said Valerie Pisano, president and CEO of Mila.

Our last, best hope for peace.

By houstonbofh • Score: 4, Informative Thread
For those of us who have been around long enough to get that quote, Mozilla has been a hell of a ride. A lot of ups and downs, but now they are the only real Chrome alternative and may become the only real AI alternative. Time to think about supporting them again.

Here we go again

By IWantMoreSpamPlease • Score: 3 Thread

AI…whether we want it or not. Thanks Mozilla, you could be spending your money on fixing your browser rather than....this.

Milli Vanilli turned out bogus,

By Tablizer • Score: 3 Thread

so will Mila Mozilla.

Wikipedia Bans Use of Generative AI

Posted by BeauHD View on SlashDot Skip
Wikipedia has banned the use of generative AI to write or rewrite articles, saying it “often violates several of Wikipedia’s core content policies.” That said, editors may still use it for translation or light refinements as long as a human carefully checks the copy for accuracy. Engadget reports:
Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs “can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.” Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.

“My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent,” Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a “pushback against enshittification and the forceful push of AI by so many companies in these last few years.”

Andrew Jackson of the Mind

By Pseudonymous Powers • Score: 4, Insightful Thread

1. Obviously. So very obvious, in fact, that I am surprised to hear that LLMs weren’t already banned several years ago.

2. How are they going to enforce it? There’s a large contingent of alleged humans who get a tingle in their nethers from presenting LLM output as their own original thought.

Re:Bye bye Wikipedia

By Noofus • Score: 5, Insightful Thread

If you can’t take a list of bullets and turn it into a paragraph of text, what are you even doing trying to edit anything?

Re:Bye bye Wikipedia

By dgatwood • Score: 5, Insightful Thread

Even for authors of encyclopedia articles, there’s nothing wrong with telling ChatGPT to “take this list of bullets and write it up as a paragraph.”

Until it hallucinates and adds something that wasn’t there or changes the meaning significantly. In my experience, AI is really good at screwing things up in ways that nobody expects. And if the people making the changes aren’t subject-matter experts, but are just doing drive-by edits to try to make things more digestible, they might not notice the errors if they are subtle enough. Allowing any random person to do stuff like that could potentially cause a lot of damage really quickly.

Nor is there anything wrong with asking it to make a diagram of some process etc.

Until it steals the chart blatantly from somebody’s published book, and Wikipedia gets sued for copyright infringement. Wikipedia isn’t just trying to protect itself from erroneous data. It’s trying to protect itself from liability. With user-uploaded content, the user can self-certify that they have the right to upload it, and apart from user incompetence, that’s usually going to be good enough. With AI-generated images, it is impossible for a user to know for certain whether what they are uploading is infringing, and would be hard to later prove which AI generated the diagram to transfer the liability to the AI company.

But the biggest risk, IMO, would be asking it to make a chart with numbers from some table. It could manipulate the numbers, and if someone isn’t checking closely, they might not see the error, but the incorrect chart could easily mislead people. AI-based chart generation seems way more likely to introduce errors than a human copying and pasting the table into a spreadsheet and generating the chart with traditional non-AI-based tools.

Someone else is going to clone wikipedia and the authorship will no doubt migrate to where they are allowed to use contemporary tooling.

And after a few months, people will complain that the content is constantly wrong, the editors over there will give up trying to keep the error rate under control, and anyone with a clue will come running back to Wikipedia.

Re:Bye bye Wikipedia

By Geoffrey.landis • Score: 5, Insightful Thread

Wikipedia is choosing to die. There is a lot wrong with a lot of what people are doing with GenAI but it is also super useful.

Unfortunately, even the best LLMs sometimes make up information (“hallucinate”), and the stuff they make up is deliberately crafted to appear exactly like real information. This is simply unacceptable for an encyclopedia.

If Wikipedia were written by paid professionals, you could plausibly put in place protocols to check and verify, and fire the ones who fail to check properly, but even paid professionals have been seen to let hallucinations through. For an encyclopedia put together by volunteers, forbidding AI is pretty much a forced choice.

https://www.evidentlyai.com/bl…
  https://arize.com/llm-hallucin…
  https://thisweekinsciencenews....

Good for them.

By eriks • Score: 3 Thread

And stop calling it “Generative” — it doesn’t generate anything — at best, it’s “reflective” in that it reflects back whatever was put into it. It’s still GIGO.

Tracy Kidder, Author of ‘The Soul of a New Machine’, Dies At 80

Posted by BeauHD View on SlashDot Skip
Ancient Slashdot reader wiredog writes:
Tracy Kidder, author of "The Soul of a New Machine,” has died at the age of 80. “The Soul of a New Machine” is about the people who designed and built the Data General Nova, one of the 32-bit superminis released in the 1980s, just before the PC destroyed that industry. It was excerpted in The Atlantic.

“I’m going to a commune in Vermont and will deal with no unit of time shorter than a season.”

Great book, partially why I am a programmer

By greytree • Score: 5, Informative Thread
I loved that book! It romanticized computers and programming and partially inspired my career.

RIP, Mr Kidder.

I thought it was the DG Eclipse

By Z00L00K • Score: 5, Informative Thread

I thought it was the development of the DG Eclipse that the book was about.

In any case it was a great story, with the machines named Coke and Gollum. Originally the idea was Coke and Pepsi, but one of the machines was temperamental so it got renamed.

Winner of the Pulitzer

By necro81 • Score: 4, Interesting Thread
Soul of a New Machine is a really fun book from the standpoint of the technology and culture of the time. But let’s not forget it was widely regarded as just awesome writing: it won the Pulitzer and the National Book Award for nonfiction.

Tracey Kidder also wrote Mountains Beyond Mountains, about Dr. Paul Farmer and the work of his medical non-profit Partners In Health. Another excellent read.

+1

By JBMcB • Score: 5, Informative Thread
Novas were 16-bit machines. I know because there are 16 select toggles on the front of mine :)

Soul of a New Machine was about the development of the MV line, which was the 32-bit extension of the Eclipse line, which was an extension (virtual memory, multitasking, etc.) of the Nova line. Similar to how VAXes were based on the PDP-11 architecture.

‘Soul’ is utterly brilliant

By EightBells • Score: 4, Insightful Thread
I had been working for a large timeshare company for several years before “Soul of a New Machine” was published. At the risk of being immodest, every page could have been written about my experiences and colleagues at a fantastic workplace filled with brilliant people whom I will never forget. Sorry to hear of your swapout, Mr Kidder.

China Reviews $2 Billion Manus Sale To Meta As Founders Barred From Leaving Country

Posted by BeauHD View on SlashDot
Chinese authorities have barred two Manus executives from leaving the country while investigating whether Meta’s reported $2 billion acquisition of the Singapore-based AI startup violated foreign investment reporting rules. “Manus was founded in China but last year relocated its headquarters and core team to Singapore,” notes the Financial Times. “Meta acquired it for $2 billion at the end of last year.” The Financial Times reports:
Manus’s chief executive Xiao Hong and chief scientist Ji Yichao were summoned to a meeting in Beijing with the National Development and Reform Commission this month, according to three people with knowledge of the matter. They said Xiao and Ji were questioned on potential violations of foreign direct investment rules related to its onshore Chinese entities.

After the meeting, the Singapore-based executives were told they were not allowed to leave China because of a regulatory review, while they remain free to travel within the country, two of the people said. No formal investigation has been opened and no charges have been brought. Manus is actively seeking law firms and consultancies to help resolve the matter, said a person with knowledge of the move.

Manus

By Valgrus Thunderaxe • Score: 3 Thread
I had to look this up:

Manus is the action engine that goes beyond answers to execute tasks, automate workflows, and extend your human reach.

Once I read this corporate drivel I decided I don’t care. When all these C-types are in the board room sitting around that mahogany table, do they actually speak this way between themselves or is this style of speech strictly for gullible investors and an intellectually lazy public?

Re:Take the loss, Cathay.

By DamnOregonian • Score: 4 Thread
Relocating your headquarters does not unincorporate your incorporated entities within the jurisdiction in question, and that’s the trouble they’re in now. They have many onshore (Chinese) holdings.

An Irish corporation doing business in the US via a US-incorporated entity is subject to US law. The people within that parent corporation are very subject to US law if they step foot on US soil, and mostly subject to US law if they step foot on the soil of any country with an extradition treaty with the US.