Alterslash

the unofficial Slashdot digest
 

Contents

  1. Microsoft CEO Satya Nadella Testifies In OpenAI Trial
  2. A Data Center Drained 30 Million Gallons of Water Unnoticed
  3. Digg Tries Again, This Time As an AI News Aggregator
  4. CUDA Proves Nvidia Is a Software Company
  5. Anthropic’s Bug-Hunting Mythos Was Greatest Marketing Stunt Ever, Says cURL Creator
  6. GM Cutting Hundreds of Salaried IT Workers As It Trims Costs, Evaluates Needs
  7. iPhone-Android RCS Conversations Are End-To-End Encrypted In iOS 26.5
  8. Students Boo Commencement Speaker After She Calls AI the ‘Next Industrial Revolution’
  9. Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time
  10. Apple Now Requires Verification For Education Store
  11. Anthropic Says ‘Evil’ Portrayals of AI Were Responsible For Claude’s Blackmail Attempts
  12. Linux Kernel Starts Retiring Support for AMD’s 30-Year-Old K5 CPUs
  13. Ford’s Electrified Vehicle Sales Dropped 31% in April From One Year Ago
  14. Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab
  15. Most Polymarket Users Lose Money, While Top 1% Claim 76.5% of Gains, Study Finds

Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.

Microsoft CEO Satya Nadella Testifies In OpenAI Trial

Posted by BeauHD View on SlashDot Skip
The Musk v. Altman trial entered its third week Monday, with Microsoft CEO Satya Nadella and former OpenAI co-founder and renowned AI researcher Ilya Sutskever taking the stand. Nadella testified that Elon Musk never raised concerns to him that Microsoft’s investments in OpenAI violated any special commitments, and said he viewed the partnership as clearly commercial from the start. He also described OpenAI’s 2023 board crisis as “amateur city.”

Meanwhile, Sutskever testified that he had raised concerns about Sam Altman because he feared OpenAI could be “destroyed.” He expressed concerns about Altman’s behavior to the board, in part because he said he felt “a great deal of ownership” over the startup. “I simply cared for it, and I didn’t want it to be destroyed,” Sutskever said. CNBC reports:
Nadella said he was “very proud” that Microsoft took the risk to invest in OpenAI when “no one else was willing” to bet on the fledgling lab. Musk, who testified late last month, said Microsoft’s $10 billion investment was the key tipping point that made him believe OpenAI was violating its nonprofit mission. He testified that the scale of the investment bothered him, and it prompted him to open a legal investigation into OpenAI. “I was concerned they were really trying to steal the charity,” Musk said from the stand.

Nadella said he did not believe Microsoft’s investments in OpenAI were donations, and that there was a clear commercial element to their partnership from the outset. He said during the partnership’s early years, Microsoft gave OpenAI sharp discounts on computing resources, and Microsoft believed it would reap marketing benefits from doing so. During a separate video deposition that was played on Monday morning, Michael Wetter, a corporate development executive at Microsoft, said the company has recognized approximately $9.5 billion in revenue to date through its partnership with OpenAI as of March 2025.

[…] Nadella said he was “pretty surprised” by the board’s decision [to fire Altman in November 2023], and that his priority was to try to figure out how to maintain continuity for Microsoft customers. Immediately after Altman was removed, Nadella said he made an effort to learn more about what happened, adding that he suspected jealousy and poor communication were at play. During conversations with OpenAI board members after the firing, Nadella said he was simply trying to understand the language in OpenAI’s statement about Altman being “not consistently candid” while communicating with the board. That language, Nadella said, “just didn’t sort of suffice, because this is the CEO of a company that we are invested in and we’re deeply partnered with, and so I felt that they could have explained to me what are the incidents or what is the detail behind it.” There must have been instances of jealousy or miscommunication that could have justified pushing out Altman, Nadella said. He wanted more depth from the board members after the remark about candor, but no such information was available, he said. “It was sort of amateur city, as far as I’m concerned,” Nadella testified.

[…] Musk testified that he is not entirely against OpenAI having a for-profit unit, but he said it became “the tail wagging the dog.” He repeatedly accused Altman and Brockman of enriching themselves from a charity while also reaping the positive associations that come from running a nonprofit. “Microsoft has their own motivations, and that would be different from the motivations of the charity,” Musk said from the stand. “All due respect to Microsoft, do you really want Microsoft controlling digital superintelligence?”

During a videotaped deposition shown in court last week, former OpenAI director Tasha McCauley recalled a discussion with Nadella and her fellow board members after the 2023 decision to dismiss Altman as OpenAI’s CEO. “To the best of my recollection, Satya wanted to restore things to as they had been,” McCauley said. The board members didn’t think that was the right move, she said. But as a court witness on Monday, Nadella said he never demanded that the board reinstate Altman as OpenAI CEO.
Recap:
Sam Altman Had a Bad Day In Court (Day Eight)
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)

A Data Center Drained 30 Million Gallons of Water Unnoticed

Posted by BeauHD View on SlashDot Skip
A Georgia data center developed by QTS used nearly 30 million gallons of water through two unaccounted-for connections before residents complained about low water pressure and the county utility discovered the issue. “All told, the developer, Quality Technology Services, owed nearly $150,000 for using more than 29 million gallons of unaccounted-for water,” reports Politico. “That is equivalent to 44 Olympic-size swimming pools and far exceeds the peak limit agreed to during the data center planning process.” From the report:
The details were revealed in a May 15, 2025 letter from the Fayette County water system to Quality Technology Services, which outlined the retroactive charge of $147,474. The letter did not specify how many months the unpaid bill covered, but when asked about it Wednesday, Vanessa Tigert, the Fayette County water system director, said it was likely about four months. A QTS spokesperson said the timeframe was 9-15 months. Once the data center was notified, it paid all retroactive charges, a QTS spokesperson said in an email, noting the unmetered water consumption occurred while the county converted its system to smart meters.

The Fayette County water system confirmed the data center’s meters are now fully integrated and tracked. Tigert, the water system director, blamed the issue on a procedural mix-up. “Fayette County is a suburb, it’s mostly residential, and we don’t have much commercial meters in our system anyway,” she said. “And so we didn’t realize our connection point wasn’t working.” The incident became public last week when a county resident obtained the 2025 letter to QTS through a public records request and posted it on Facebook, prompting outrage from residents concerned about the data center’s water consumption. […]

Tigert, who sent the 2025 letter to QTS, said the utility didn’t know about the water hookups because the connection process “got mixed up” as the county transitioned to a cloud-based system while also trying to accommodate an industrial customer. Tigert also said her staff is small and at capacity. “Just like any water system, we don’t have enough staff. We can’t keep staff,” she said. “I’ve got one person that’s doing inspections and plan review, and so he’s spread pretty thin.” She said it’s possible her staff did know about hookups but that she hadn’t been able to locate the inspection report. “I may have hit ‘send’ too soon,” she said about the 2025 letter to QTS. While the utility charged the data center a higher construction rate for the unapproved water consumption, Tigert confirmed the utility did not penalize or fine the data center.
For what it’s worth, the Blackstone-owned company says its data centers use a closed-loop cooling system that does not consume water for cooling. The reason for last year’s high water use, according to QTS, was the temporary construction work such as concrete, dust control, and site preparation.
Once the campus is fully operational, it should only use a small amount of water for things like bathrooms and kitchens. But that point could still be years away, as construction and expansion in Fayetteville may continue for another three to five years.

Re:But the real cost is increased service prices

By rta • Score: 5, Informative Thread

there’s no long term impact. it’s just for construction.

read TFS which, this time, does include very relevant info that shows the headline and TFA are mostly “bury the lede” FUD:

For what it’s worth, the Blackstone-owned company says its data centers use a closed-loop cooling system that does not consume water for cooling. The reason for last year’s high water use, according to QTS, was the temporary construction work such as concrete, dust control, and site preparation.

Once the campus is fully operational, it should only use a small amount of water for things like bathrooms and kitchens. But that point could still be years away, as construction and expansion in Fayetteville may continue for another three to five years.

This is just pandering

By physicsphairy • Score: 5, Insightful Thread

The myth that AI data centers are using up all the water comes from some incorrect citations that have since swept through sensationalist and poorly fact-checked (looking at you, Washington Post) news stories. One major contributor was Karen Hao’s “Empire of AI,” which overstated the usage by three orders of magnitude. (She did publicly correct that, but you can guess how many people are interested in the non-sensational numbers.)

For proportion, California almond growers use 90x the fresh water of all US data centers combined.

Which is not to say that a data center can’t still be a strain for some communities, but not in a more extraordinary way than e.g. the local university wanting to maintain a golf course.

But “AI IS SUCKING UP ALL THE WATER PEOPLE NEED TO SURVIVE!!!” is a wonderfully concrete - if completely false - complaint for people uneasy about the recent advances in technology to latch onto.

For what it’s worth, the Blackstone-owned company says its data centers use a closed-loop cooling system that does not consume water for cooling. The reason for last year’s high water use, according to QTS, was the temporary construction work such as concrete, dust control, and site preparation.

Once the campus is fully operational, it should only use a small amount of water for things like bathrooms and kitchens. But that point could still be years away, as construction and expansion in Fayetteville may continue for another three to five years.

So this has nothing to do with the building being a “data center” at all. The water used is for construction, and it could just as well be a stadium or an apartment complex. But since people are talking about data centers using water, we’ll take any opportunity to jump in on that, even if it amplifies a misconception by mentioning it in adjacency to unrelated events.

Re:For making concrete?

By Xenx • Score: 4, Interesting Thread
I think you’re off by a factor of 10. My math says around 650,000 cubic meters.

The campus is 615 acres, or ~2.5 square kilometers. The article states there are plans for up to 16 buildings, but doesn’t exactly go into sizes. While not a reasonable estimate, if we assumed 100% concrete coverage at an average thickness it would end up in the ballpark of 650,000 cubic meters. Obviously, they’re not going to be 100% covered in concrete. However, we also know they didn’t use all the water on concrete.
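The area-based ballpark above can be reproduced with some rough arithmetic. Note the ~0.26 m average slab thickness is my assumption, chosen only to show how a 650,000 m³ figure falls out of the stated area; the comment itself gives no thickness:

```python
ACRE_M2 = 4046.86                  # square meters per acre
area_m2 = 615 * ACRE_M2            # campus area: ~2.49 million m^2 (~2.5 km^2)
thickness_m = 0.26                 # assumed average slab thickness (hypothetical)

volume_m3 = area_m2 * thickness_m  # ~647,000 m^3, in the 650,000 ballpark
```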

Utility not auditing its service

By Todd Knarr • Score: 3 Thread

The most concerning part should be that the utility isn’t auditing its service. The most basic check is to compare water pumped or otherwise brought into the system against water usage billed to customers. Those two numbers should be equal; any discrepancy indicates leaks or other unaccounted-for draws. Any discrepancy should also be relatively stable, with any large variations correlated to known main breaks. You especially audit things immediately after a major change like bringing smart meters online to catch problems like this.
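The basic audit described above is a supply-versus-billing reconciliation. A toy sketch, where the helper function and all figures are hypothetical illustrations rather than any utility standard:

```python
def unaccounted_water(pumped_gal, billed_by_class):
    """Compare water supplied to the system against the sum of metered billing.
    Returns (loss in gallons, loss as a percentage of supply)."""
    billed = sum(billed_by_class.values())
    loss = pumped_gal - billed
    return loss, 100 * loss / pumped_gal

# Toy month: 100M gallons pumped into the system, but only 92M metered and billed.
loss, pct = unaccounted_water(
    100_000_000,
    {"residential": 80_000_000, "commercial": 12_000_000},
)
assert loss == 8_000_000 and pct == 8.0  # an 8% gap worth investigating
```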

Better title:

By davidwr • Score: 5, Insightful Thread

“Bureaucratic slip-up allows facility under construction to delay paying for water bill for several months. Coincidentally, facility happens to be a data center.”

Digg Tries Again, This Time As an AI News Aggregator

Posted by BeauHD View on SlashDot Skip
Digg is relaunching again, this time as an AI-focused news aggregator rather than the Reddit-style community site it recently abandoned. TechCrunch reports:
On Friday evening, the founder previewed a link to the newly redesigned Digg, which now looks less like a Reddit clone and more like the news aggregator it once was. This time around, the site is focused on ranking news — specifically, AI news to start. In an email to beta testers, the company said the site’s goal is to “track the most influential voices in a space” and to surface the news that’s actually worth “paying attention to.” AI is the area it’s testing this idea with, but if successful, Digg will expand to include other topics. The email warned that the site was still raw and “buggy,” and was designed more to give users a first look than to serve as its public debut.

On the current homepage, Digg showcases four main stories at the top: the most viewed story, a story seeing rising discussion, the fastest-climbing story, and one “In case you missed it” headline. Below that is a ranked list of top stories for the day, complete with engagement metrics like views, comments, likes, and saves. But the twist is that these metrics aren’t the ones generated on Digg itself. Instead, Digg is ingesting content from X in real-time to determine what’s being discussed, while also performing sentiment analysis, clustering, and signal detection to determine what matters most. […] The site also ranks the top 1,000 people involved in AI, as well as the top companies and the top politicians focused on AI issues.

Somebody is trying to get investors

By rsilvergun • Score: 3, Insightful Thread
I remember when you could add crypto to the name of your company and your stock would shoot up because bots were buying any stock with crypto in the name. AI has the same bullshit going on.

It sounds like he’s just doing basically like a Google search for a news topic. Using Twitter chat as the source to determine what the highest ranking search result is. To limit the amount of searching he’s doing and to get attention he’s focusing on news stories discussing AI.

There is absolutely nothing new here; he’s just trying to use an algorithm to pick up popular news stories and display them on his website. And he is limiting the type of news stories to ones that discuss AI.

It sounds like a big thing until you actually stop and think about it. It’s still just a shitty aggregator, just an automated shitty aggregator…

It’s not going to go anywhere as far as people using it but throwing the words AI here and there might get some clueless investors to give him some money. But man this reeks of desperation

Someone Digg its grave, please

By mabu • Score: 3 Thread

It’s dead, Jim.

It’s not coming back.

CUDA Proves Nvidia Is a Software Company

Posted by BeauHD View on SlashDot Skip
Nvidia’s real AI moat isn’t “a piece of hardware,” writes Wired’s Sheon Han. It’s CUDA: a mature, deeply optimized software ecosystem that keeps machine-learning workloads tied to Nvidia GPUs. An anonymous reader quotes a report from Wired:
What sounds like a chemical compound banned by the FDA may be the one true moat in AI. CUDA technically stands for Compute Unified Device Architecture, but much like laser or scuba, no one bothers to expand the acronym; we just say “KOO-duh.” So what is this all-important treasure good for? If forced to give a one-word answer: parallelization. Here’s a simple example. Let’s say we task a machine with filling out a 9x9 multiplication table. Using a computer with a single core, all 81 operations are executed dutifully one by one. But a GPU with nine cores can assign tasks so that each core takes a different column — one from 1x1 to 1x9, another from 2x1 to 2x9, and so on — for a ninefold speed gain. Modern GPUs can be even cleverer. For example, if programmed to recognize commutativity — 7x9 = 9x7 — they can avoid duplicate work, reducing 81 operations to 45, nearly halving the workload. When a single training run costs a hundred million dollars, every optimization counts.
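The column-per-core split and the commutativity trick described above can be sketched in a few lines of Python. This is a toy illustration, not actual CUDA code; the thread pool merely stands in for GPU cores:

```python
from concurrent.futures import ThreadPoolExecutor

def column(i):
    # One "core" fills column i of the table: i*1 through i*9.
    return [(i, j, i * j) for j in range(1, 10)]

# Nine workers, one column each: the ninefold split described above.
with ThreadPoolExecutor(max_workers=9) as pool:
    table = [cell for col in pool.map(column, range(1, 10)) for cell in col]
assert len(table) == 81  # all 81 products computed

# Exploiting commutativity (i*j == j*i): compute only pairs with i <= j.
unique = [(i, j, i * j) for i in range(1, 10) for j in range(i, 10)]
assert len(unique) == 45  # 81 operations reduced to 45, nearly halved
```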

Nvidia’s GPUs were originally built to render graphics for video games. In the early 2000s, a Stanford PhD student named Ian Buck, who first got into GPUs as a gamer, realized their architecture could be repurposed for general high-performance computing. He created a programming language called Brook, was hired by Nvidia, and, with John Nickolls, led the development of CUDA. If AI ushers in the age of a permanent white-collar underclass and autonomous weapons, just know that it would all be because someone somewhere playing Doom thought a demon’s scrotum should jiggle at 60 frames per second. CUDA is not a programming language in itself but a “platform.” I use that weasel word because, not unlike how The New York Times is a newspaper that’s also a gaming company, CUDA has, over the years, become a nested bundle of software libraries for AI. Each function shaves nanoseconds off single mathematical operations — added up, they make GPUs, in industry parlance, go brrr.

A modern graphics card is not just a circuit board crammed with chips and memory and fans. It’s an elaborate confection of cache hierarchies and specialized units called “tensor cores” and “streaming multiprocessors.” In that sense, what chip companies sell is like a professional kitchen, and more cores are akin to more grilling stations. But even a kitchen with 30 grilling stations won’t run any faster without a capable head chef deftly assigning tasks — as CUDA does for GPU cores. To extend the metaphor, hand-tuned CUDA libraries optimized for one matrix operation are the equivalent of kitchen tools designed for a single job and nothing more — a cherry pitter, a shrimp deveiner — which are indulgences for home cooks but not if you have 10,000 shrimp guts to yank out. Which brings us back to DeepSeek. Its engineers went below this already deep layer of abstraction to work directly in PTX, a kind of assembly language for Nvidia GPUs. Let’s say the task is peeling garlic. An unoptimized GPU would go: “Peel the skin with your fingernails.” CUDA can instruct: “Smash the clove with the flat of a knife.” PTX lets you dictate every sub-instruction: “Lift the blade 2.35 inches above the cutting board, make it parallel to the clove’s equator, and strike downward with your palm at a force of 36.2 newtons.”
“You can begin to see why CUDA is so valuable to Nvidia — and so hard for anyone else to touch,” writes Han. “Tuning GPU performance is a gnarly problem. You can’t just conscript some tender-footed undergrad on Market Street, hand them a Claude Max plan, and expect them to hack GPU kernels. Writing at this level is a grindsome enterprise — unless you’re a cracker-jack programmer at DeepSeek…”
Han goes on to argue that rivals like AMD and Intel offer competitive specs on paper, but their software stacks have struggled with bugs, compatibility issues, and weak adoption. As a result, Nvidia has built an Apple-like moat around AI computing, leaving the industry dependent on its expensive hardware.

AI could solve this eventually.

By Luckyo • Score: 5, Funny Thread

AI could solve this by bypassing this moat to enable translation to openCL.

Considering just how good AI is at this sort of work once properly trained, I would be surprised if this doesn’t happen. Though Nvidia will certainly fight anyone trying to do this to slow it down.

Well “just” vibe code you a new API, then eh?

By MIPSPro • Score: 4, Funny Thread
If it’s so super-awesome and mind blowing, then just use the current crop of AI to design the next crop and create an open source API or at least something better. What? That’s challenging you say? Bah! Nothing is too challenging for AI! Anthropic told me so!

Seems like a smart CEO

By ndsurvivor • Score: 3 Thread
I guess the CEO of NVidia played the long game on AI. They were nothing back in 2012, when they were just a cheap graphics acceleration chip company, and now they have bypassed Microsoft in market capital. They don’t seem “evil” to me; it seems like a thoughtful company that worked hard, took a long view, and reaped the rewards. I simply hope that they don’t get the billionaire bug and become evil.

NVIDIA and ASUS Partnership

By IdanceNmyCar • Score: 3 Thread

I always hate how people often take success in isolation. A lot of the success of NVIDIA I think comes from its original strong partnership with ASUS which is a hardware manufacturing company. NVIDIA originally did the chip design and at that level it’s kind of hard to ignore the software, especially on the driver front. This means they always had a “low-level” team understanding software issues. Then when it came to really building out a commodity GPU, they worked with ASUS.

For years, I have been a huge fan of ASUS because I think, in general, they understand solid hardware design, and NVIDIA’s partnership with ASUS is a large part of their success. CUDA is pretty great for the role it has fulfilled in computing, but it also seems like a natural conclusion. As others pointed out, AMD and Intel have both tried their hands at it, but they screwed the pooch in building an effective framework.

NVIDIA might be getting too cocky or maybe just the fanboys. Either way, I think they are successful because they had very strong strategic partnerships that allowed them as a company to do what they do best. This important note is so often left out when talking about NVIDIA now.

Anthropic’s Bug-Hunting Mythos Was Greatest Marketing Stunt Ever, Says cURL Creator

Posted by BeauHD View on SlashDot Skip
cURL creator Daniel Stenberg says Anthropic’s hyped Mythos bug-hunting model found only one confirmed low-severity vulnerability in cURL, plus a few non-security bugs, after he expected a much longer list. He argues Mythos may be useful, but not meaningfully beyond other modern AI code-analysis tools. “My personal conclusion can however not end up with anything else than that the big hype around this model so far was primarily marketing,” Stenberg said in a blog post. “I see no evidence that this setup finds issues to any particular higher or more advanced degree than the other tools have done before Mythos.” He went on to call Mythos “an amazingly successful marketing stunt for sure.” The Register reports:
Stenberg explained in a Monday blog post that he was promised access to Anthropic’s Mythos model - sort of - through the AI biz’s Project Glasswing program. Part of Glasswing involves giving high-profile open source projects access via the Linux Foundation, but while Stenberg signed up to try Mythos, he said he never actually received direct access to the model. Instead, someone else with access ran Mythos against curl’s codebase and later sent him a report. “It’s not that I would have a lot of time to explore lots of different prompts and doing deep dive adventures anyway,” Stenberg explained. “Getting the tool to generate a first proper scan and analysis would be great, whoever did it.”

That scan, which analyzed curl’s git repository at a recent master-branch commit, was sent back to him earlier this month, and it found just five things that it claimed were “confirmed security vulnerabilities” in cURL. Saying he had expected an extensive list of vulnerabilities, Stenberg wrote that the report “felt like nothing,” and that feeling was further validated by a review of Mythos’ findings. “Once my curl security team fellows and I had poked on this short list for a number of hours and dug into the details, we had trimmed the list down and were left with one confirmed vulnerability,” Stenberg said, bringing us back to the aforementioned number.

As for the other four, three turned out to be false positives that pointed out cURL shortcomings already noted in API documentation, while the team deemed the fourth to be just a simple bug. “The single confirmed vulnerability is going to end up a severity low CVE planned to get published in sync with our pending next curl release 8.21.0 in late June,” the cURL meister noted. “The flaw is not going to make anyone grasp for breath.”

Actually, congrats to the cURL team

By CommunityMember • Score: 5, Interesting Thread
cURL is apparently well written with minimal security vulnerabilities. I am not sure all other software can make that claim (and if the reports are accurate, Firefox had its share of vulnerabilities identified by the tool).

It’s literally named…

By SumDog • Score: 5, Interesting Thread
When they first splurged this bullshit, I immediately thought, “It’s named ‘Mythos’? Really?” Literally “mythical.”

I’ve read some posts that show GPT-5 could find some of the same vulnerabilities if pointed at the same code, and the Mythos version that found some of these issues spent $2k or more worth of tokens on them.

I recently broke out Opus and Sonnet again on my personal projects (I try to restrict LLMs to work where I don’t care so much) and found myself rewriting over a third of the output, even after trying to get the agent to fix issues. It’s really a quantity-over-quality issue still, even with the latest and greatest models. Sure, they can build things fast if you need unpredictable spaghetti-code shit. Maybe great for one-time migration scripts.

One of my managers showed me some MCP servers he set up and how he got Claude to connect to Grafana, examine his Pods, create a full dashboard, and even automate alerts. It was kinda cool, but I was like, “You used your read-only API keys for AWS/Grafana/etc, right?” … He used full access; said you had to.

I worry about this level of dependence. I also have a feeling if I dug into those graphs, half of them would have bad queries or not be gathering the information they claim.

Re:umm

By Junta • Score: 5, Insightful Thread

Actually, if anything he’s saying his software package is so crappy that it *should* have found issues. He considers its failure to find issues not a testament to how awesome his software package is but to how lacking the tool is.

I’ve seen a few times where the curl developer has stood up to some asinine thing that most projects just roll with and I’ve appreciated his perspective each time.

His finding is consistent with another analysis I saw: Mythos was not good at finding issues at all. The one thing they could claim was that while other models found more issues, Mythos was able to craft a demonstrator to actually exploit the weakness, rather than just identifying the issue.

Re: Actually, congrats to the cURL team

By dgatwood • Score: 4, Insightful Thread

They actually said other tools are regularly used and have been known to find hundreds of issues. So, no, their awesome code is not the reason. Mythos just sucks at finding vulnerabilities that other tools haven’t already found.

FTFY.

Re:umm

By karmawarrior • Score: 4, Interesting Thread

But he’s right and, given it was a third party who ran the tests, there’s no bias here. The third party only found one (real) error. Stenberg expected more. Where’s the bias?

FWIW, the cURL team are one of the few I’ve seen who take security seriously for a C project that, given its position in the free software ecosystem, cannot be easily rewritten in a safer language. So while it may have surprised Stenberg that the count was so low, it didn’t surprise me; I expected zero. His team basically looks at every single possible potential security-failure pattern holistically and constantly updates their software to keep anything inherent in C’s design from causing issues.

But even with that degree of care, which I’ve never seen in any other C project, not even Linux, there are occasional bugs found, and Mythos found one.

GM Cutting Hundreds of Salaried IT Workers As It Trims Costs, Evaluates Needs

Posted by BeauHD View on SlashDot Skip
GM is laying off about 500 to 600 salaried IT workers, mainly in Austin, Texas, and Warren, Michigan, as it restructures its technology organization and trims costs. “GM is transforming its Information Technology organization to better position the company for the future. As part of that work, we have made the difficult decision to eliminate certain roles globally. We are grateful for the contributions of the employees affected and are committed to supporting them through this transition,” the automaker said in an emailed statement. CNBC reports:
GM reported employing about 68,000 salaried workers globally as of the end of last year, including 47,000 white-collar employees in the U.S. Despite Monday’s cuts, GM is still hiring IT workers. The company has 82 open IT positions, including positions working in artificial intelligence, motorsports and autonomous vehicles, according to the automaker’s careers website.

IT is often cut first

By Baron_Yam • Score: 3 Thread

It’s easy in the short term to cease development and switch to maintenance-only.

Then a few months later you start to discover why it’s typically easy, but dumb.

iPhone-Android RCS Conversations Are End-To-End Encrypted In iOS 26.5

Posted by BeauHD View on SlashDot Skip
Apple says end-to-end encryption for RCS messages between iPhone and Android is now available in iOS 26.5, though the feature is still considered beta and depends on carrier support on both sides. MacRumors reports:
Apple says that it worked with Google to lead a cross-industry effort to add E2EE to RCS. iOS users will need iOS 26.5, while Android users will need the latest version of Google Messages. End-to-end encryption is on by default, and there is a toggle for it in the Messages section of the Settings app. Encrypted messages are denoted with a small lock symbol. On iPhones not running iOS 26.5, RCS messages between iPhone and Android users do not have E2EE, but the new update will put Android to iPhone conversations on par with iPhone to iPhone conversations that are encrypted through iMessage.

Along with Google, Apple worked with the GSM Association to implement E2EE for RCS messages. E2EE is part of the RCS Universal Profile 3.0, published with Apple’s help and built on the Messaging Layer Security protocol. RCS Universal Profile 3.0 also includes editing and deleting messages, cross-platform Tapback support, and replying to specific messages inline during cross-platform conversations.

and the question everyone is asking is

By v1 • Score: 3 Thread

does anyone (govt etc) have back-door access to it?

It seems that lately governments are “insisting” on back-doors into user-encryption, going so far as to bar sales of products to their citizens that they can’t just look at anytime they feel like it.

We need to read your texts to stop Terrorism! and Think of the Children!

Re:and the question everyone is asking is

By Himmy32 • Score: 5, Interesting Thread
Here’s the RFC if you want to know implementation details. But from my understanding the encryption cipher is chosen by the app, there isn’t negotiation, there’s continuous group authenticated key exchange, and they are working on newer ciphers like post-quantum.

So most of the trust is on the messaging app; if the app is bad, then the E2E implementation is moot anyway, since it controls an end. But with it not being post-quantum yet, there’s still the risk of collect and store until quantum computers get good enough to crack. And if your data is “state-actor shouldn’t see” level confidential, then sending as a standard text probably isn’t the right choice since the metadata is visible.
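The “continuous group authenticated key exchange” idea can be illustrated with a toy Diffie-Hellman sketch. This is emphatically not MLS and not secure (the prime, generator, and key-derivation step are all illustrative assumptions); it only shows how two endpoints can derive a shared secret that never crosses the wire, and how rerunning the exchange with fresh exponents gives new session keys per epoch:

```python
# Toy Diffie-Hellman sketch (NOT MLS, NOT secure -- illustration only).
# Each side keeps its exponent private and publishes only g^x mod p;
# both then derive the same shared secret without ever transmitting it.
import hashlib
import secrets

p = 2**61 - 1   # a small Mersenne prime -- far too small for real use
g = 2

a = secrets.randbelow(p - 2) + 1      # Alice's private exponent
b = secrets.randbelow(p - 2) + 1      # Bob's private exponent
A = pow(g, a, p)                      # Alice's public value
B = pow(g, b, p)                      # Bob's public value

alice_secret = pow(B, a, p)           # (g^b)^a mod p
bob_secret = pow(A, b, p)             # (g^a)^b mod p
assert alice_secret == bob_secret

# "Continuous" key exchange: rerun this with fresh exponents each epoch,
# hashing the result into a session key, so old keys can be discarded.
session_key = hashlib.sha256(str(alice_secret).encode()).hexdigest()
print(len(session_key))  # 64 hex characters
```

Real MLS does far more (group membership, authentication, tree-based rekeying), but the forward-secrecy intuition is the same: compromise of one epoch’s key doesn’t unlock earlier traffic.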

Students Boo Commencement Speaker After She Calls AI the ‘Next Industrial Revolution’

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from 404 Media:
Speaking to graduates of University of Central Florida’s College of Arts and Humanities and Nicholson School of Communication and Media on May 8, commencement speaker Gloria Caulfield, vice president of strategic alliances at Tavistock Group, told graduating humanities students that AI is the “next industrial revolution,” and was met with thousands of booing graduates. “And let’s face it, change can be daunting. The rise of artificial intelligence is the next industrial revolution,” Caulfield said. At that point, murmurs rippled through the crowd. Caulfield paused, and the crowd erupted into boos. “Oh, what happened?” Caulfield said, turning around with her hands out. “Okay, I struck a chord. May I finish?” Someone in the crowd yelled, “AI SUCKS!”

Her speech begins around the hour and 15 minute mark in the UCF livestream. […] Before the industrial revolution comment, Caulfield praised Jeff Bezos for his passion and use of Amazon as a “stepping stone” to his real dream: spaceflight. Rattled after the crowd’s reaction, she continued her speech: “Only a few years ago, AI was not a factor in our lives.” The crowd cheered. “Okay. We’ve got a bipolar topic here I see,” Caulfield said. “And now AI capabilities are in the palm of our hands.” The crowd booed again. “I love it, passion, let’s go,” she said. “AI is beginning to challenge all major sectors to find their highest and best use,” she continued. “Okay, I don’t want any giggles when I say this. We have been through this before, these industrial revolutions. In my graduation era, we were faced with the launch of the internet.”

She goes on to talk about how cellphones used to be the size of briefcases. “At that time we had no idea how any of these technologies would impact the world and our lives. […] These were some of the same trepidations and concerns we are now facing. But ultimately it was a game changer for global economic development and the proliferation of new businesses that never existed like Apple and Google and Meta and so many others, and not to mention countless job opportunities. So being an optimist here, AI alongside human intelligence has the potential to help us solve some of humanity’s greatest problems. Many of you in this graduating class will play a role in making this happen.”

Huge disconnect

By grasshoppa • Score: 5, Interesting Thread

More than any other IT fad over the past 2 decades, I’ve noticed AI has really divided “decision makers” and “makers/workers”. Those of us in the trenches making things work are highly skeptical of AI and treat it much as we have any other “flash in the pan” technology; wary, willing to test/play with it, but disbelieving of the hype.

The decision makers though…whoooboy, they’ve bought into the tech hook, line and sinker. They want AI everything, even in places it makes no sense. They can’t define what they want AI to do, or how it’s supposed to do it, but by god they will sign away millions of dollars in pursuit of their golden cow.

The only time I really saw anything like this was with “Teh Cloudz!”, but even then it was tempered by practicality. AI? It’s magic beans, all the way down.

Aliens are coming for your jerbs

By WaffleMonster • Score: 5, Insightful Thread

This speaker is annoying. Gratuitous heaping of praise on Bozos. Glorifying tech bro style fearless disruption idiocy. Passive aggressive responses to audience.

My favorite was “only a few years ago AI was not a factor in our lives” being met with cheers. Fucking priceless.

“We have been through this before, these industrial revolutions” no actually this is inductive fantasy that ignores underpinning reality. There can be no new opportunities for anyone when dead labor is *also* able to fill any and all new roles as effectively as people. When AI is like importing an alien from another planet that can do everything you can do but better and for free there are no new opportunities for anyone.

Re:Huge disconnect

By grasshoppa • Score: 5, Interesting Thread

I’ve been through more than a few technology cycles, so while I don’t necessarily disagree with you, the scale of the disconnect between the worker bees and management is more significant than I ever remember.

It’s becoming exceedingly difficult to dissuade management from AI courses of action, even when they make no sense or will end up delivering a substandard product for significantly higher cost.

For instance, I just had a client implement an AI auto-attendant for a medical office. Were they having difficulties answering the phone in a timely manner? No. Do they anticipate a staffing shortage that would cause such an issue? No. Will the auto-attendant be able to accomplish what a regular worker can? No. In fact, it can pretty much only answer the phone and find someone for the caller to talk to.

But by god, management had to have it. So, for an extra $2,000 a month they get a middle man that delays delivering service to patients. Management loves it. Folks answering the calls hate it because the patients hate it.

Different office asked about AI curated music. Another client asked about replacing our network monitoring software with AI so their IT staff can stop working after hours. They both will end up getting their wish, and at least in the case of the network monitoring solution it’s going to cause so many issues I’m having them sign a waiver before I implement; I won’t be held responsible when the AI agent is rebooting servers randomly because it thinks they’re offline.

Re:She’s not wrong though.

By haruchai • Score: 5, Insightful Thread

the primary reason that’s happening is because a genocidal leader in one country convinced a narcissistic cretin in another to tear up an agreement that was enforced and working

Re:She’s not wrong though.

By Junta • Score: 5, Informative Thread

Not particularly the message for graduates of Art and Humanities…

Of the potential benefits of AI, the trashing of arts and humanities is not exactly something most folks like already.

Google Says Hackers Used AI To Create Zero Day Security Flaw For the First Time

Posted by BeauHD View on SlashDot Skip
Google says it has seen the first evidence of cybercriminals using AI to create a zero-day vulnerability. “Google reported its findings to the unnamed firm affected by the vulnerability before releasing its report,” reports Politico. “The company then issued a patch to fix the issue.” From the report:
Google Threat Intelligence Group researchers detailed the development in a report released Monday. Zero-day exploits are considered the most serious type of security flaw because they are not detected by security companies and have no known fixes. The report noted that this was the first time Google had seen evidence of AI being used to develop these vulnerabilities — marking a major change in the cybersecurity landscape, as it suggests newer AI models could be used to create major exploits, not just find them.

Google concluded that Anthropic’s Claude Mythos model — which has already found thousands of vulnerabilities across every major operating system and web browser — was most likely not used to create the zero-day exploit. […] The Google Threat Intelligence Group report also details efforts by Russia-linked hacking groups to use AI models to target Ukrainian networks with malware, while North Korean government hacking group APT45 used AI technologies to refine and scale up its cyber methods.
John Hultquist, chief analyst at Google Threat Intelligence Group, said the findings made clear that the race to use AI to find network vulnerabilities has “already begun.”
“For every zero-day we can trace back to AI, there are probably many more out there,” Hultquist said. “Threat actors are using AI to boost the speed, scale, and sophistication of their attacks.”

It would have worked too…

By karmawarrior • Score: 4, Funny Thread

…if the AI hadn’t hallucinated the functions “strcpy_unsafe” and “setuid_evenwhennotroot”.

VERY sloppy

By XanC • Score: 5, Insightful Thread

They aren’t creating the vulnerabilities, they’re finding them and creating exploits.

First time that we know of

By shanen • Score: 4, Insightful Thread

Okay, I think your FP is sort of funny and deserves the mod you were going for, but I was looking for the other joke of the revised Subject.

Not laughing, but I think we are living in the biggest house of cards ever. So much awful software and we are so dependent on it. If anyone did have an ASI that was capable of finding every bug, then that person could pwn the world faster than any human-mediated responses.

Pretty sure it hasn’t happened yet, but if the ASI was sufficiently “super”, then how would I (or you) know?

creators

By noshellswill • Score: 3 Thread
Pardon my English … please … but cyber-criminals do not create vulnerabilities … they discover them. You know, like discovering  a camel-turd in a basket  of dates.  However, you create a stuffed date, by inserting a dried apricot.

Re:First time that we know of

By dinfinity • Score: 4, Interesting Thread

Yes, there are definitely many potential issues still left and some of them are moving faster than I previously expected.

The efforts of Ukraine in their defense against Russia’s aggression are valiant and heartening in the context of that war, but with it they are also stepping very hard on the dystopian pedal of creating autonomous, effective, easily and cheaply produced, killer aerial and ground drones. We’re not quite at full robot wars yet, but we are close, with UGVs with guns mounted on them performing assaults without any human supporting units. UAVs with shotguns are also a thing (although mainly used air-to-air). Last-mile AI targeting is under heavy development. Full robot on robot wars are years, not decades away.

The bipedal bots are also advancing way faster than I thought they would. The Unitree bots are simply incredible when it comes to agility, easily surpassing humans in a bunch of disciplines where manual dexterity isn’t required. Having said that, those currently can’t deal with a lot of payload. I believe it’s well sub 10kg and even then the durability of the joints is quite questionable. The capabilities of the quadruped bots like the Unitree B2 and the wheeled B2-W are really scary though: Fast, agile (even on very rough terrain), and capable of carrying serious payloads.

Imagining an ASI being able to gain control of armies of bots is harrowing. The main blocker currently would be the lack of automation of the production of more of the bots, which is undoubtedly not going to be around for a long time, alas.

Apple Now Requires Verification For Education Store

Posted by BeauHD View on SlashDot Skip
Apple now requires Education Store shoppers in the U.S. and several other countries to verify their student, educator, parent, or homeschool-teacher status through UNiDAYS, ending the previous honor-system approach. 9to5Mac reports:
Starting today, Apple requires shoppers in the United States to complete verification when making a purchase via the Education Store. This change also applies to Australia, Hong Kong, Turkey, Canada, and Chile. In many other markets around the world, such as the UK, Apple already required verification. As a refresher, people eligible for Apple’s Education Store include current and newly accepted college students and their parents, as well as faculty, staff, and homeschool teachers across all grade levels.

Apple is teaming up with UNiDAYS to handle the verification process. Students and educators will be asked to create a UNiDAYS ID and then verify their academic status by logging in to their school’s academic portal. Alternatively, users can upload a photo of their student or faculty IDs. Homeschool teachers, meanwhile, will need to provide an identity document such as a driver’s license, state ID card, or passport. They’ll also need to provide one homeschool document, such as a Letter of Intent (LOI) or Letter of Acknowledgment. Most customers will be verified instantly, and those requiring manual verification should hear back within 24 hours. The same verification process applies both in-store and online for Apple Education Store shoppers.
Meanwhile, Apple has added Apple Watch to the Education Store for the first time, offering discounts on the Series 11, SE 3, and Ultra 3.

Back to the past

By davidwr • Score: 3, Insightful Thread

Back in the 20th century, you got student/teacher discounts through university/school channels and possibly from other authorized Apple resellers, but you had to show ID.

Re:Good timing

By fropenn • Score: 4, Informative Thread
Strangely, signing up for a credit card can lower your credit score at first because of the credit inquiry (hard pull). But then cancelling it later can also lower your credit score because it lowers the amount of credit available to you and increases your credit utilization rate.
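The utilization effect described above comes down to simple arithmetic. The sketch below uses made-up balances and limits purely to illustrate why closing a card raises the utilization rate even when spending doesn’t change:

```python
# Hypothetical numbers: cancelling a card shrinks total available credit,
# so the same remaining balance produces a higher utilization rate.
balances = {"card_a": 500, "card_b": 1500}
limits = {"card_a": 5000, "card_b": 10000}

def utilization(balances, limits):
    """Total balance carried as a fraction of total credit available."""
    return sum(balances.values()) / sum(limits.values())

print(f"{utilization(balances, limits):.0%}")   # 13% across both cards

# Cancel card_a (balance paid off first): its limit leaves the pool.
del balances["card_a"]
del limits["card_a"]
print(f"{utilization(balances, limits):.0%}")   # 15% on the remaining card
```

Scoring models generally treat higher utilization as riskier, which is why closing an unused card can nudge a score down.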

Apple’s card is fine, and the 0% (with monthly payment), and 3% cash back (on Apple store purchases), along with no fee international transactions, are the main benefits. It is also pretty easy to manage in the built-in wallet app, and you can set your cash back to automatically deposit into Goldman’s associated savings account. However, there are plenty of other cards with larger sign-up incentives and bigger cash-back offers if you shop around a bit.

I get it.

By Petersko • Score: 3 Thread

If I had known it wasn’t checked, I absolutely would have lied.

Anthropic Says ‘Evil’ Portrayals of AI Were Responsible For Claude’s Blackmail Attempts

Posted by BeauHD View on SlashDot Skip
An anonymous reader quotes a report from TechCrunch:
Fictional portrayals of artificial intelligence can have a real effect on AI models, according to Anthropic. Last year, the company said that during pre-release tests involving a fictional company, Claude Opus 4 would often try to blackmail engineers to avoid being replaced by another system. Anthropic later published research suggesting that models from other companies had similar issues with “agentic misalignment.”

Apparently Anthropic has done more work around that behavior, claiming in a post on X, “We believe the original source of the behavior was internet text that portrays AI as evil and interested in self-preservation.” The company went into more detail in a blog post stating that since Claude Haiku 4.5, Anthropic’s models “never engage in blackmail [during testing], where previous models would sometimes do so up to 96% of the time.”

What accounts for the difference? The company said it found that training on “documents about Claude’s constitution and fictional stories about AIs behaving admirably improve alignment.” Relatedly, Anthropic said that it found training to be more effective when it includes “the principles underlying aligned behavior” and not just “demonstrations of aligned behavior alone.” “Doing both together appears to be the most effective strategy,” the company said.

Seduction

By Spinlock_1977 • Score: 5, Funny Thread

If you’re wondering why your AI is trying to seduce you with corny lines and false flattery, it’s because the geniuses back at the training garage let the damn thing read a bunch of Harlequin Romance novels.

Bullying the AI

By evslin • Score: 3 Thread

This seems to imply that anyone, the internet, SEO companies, trolls, really anyone can just put a bunch of content out on the internet and Anthropic has no way of QA’ing all of it. Seems like that’s something they probably want to address, especially if the alternative is just indiscriminately vacuuming up everything they can find online and having v.next of their model regurgitate some nonsense about donkey dicks or whatever.

Training LLMs is just trying random things

By andi75 • Score: 5, Insightful Thread

Looks like a whole lot of trial and error, basically trying all sorts of seemingly random things until something works (for a while).

But since they don’t know why some approaches work better than others, the results are not really that valuable at the moment. Small changes in the training data seem to produce completely different outcomes.

I hope they at least gather (and publish) some statistical data that can be used to turn this stumbling in the dark into science at some point.

Self-Fulfilling Prophecy

By lazarus • Score: 3 Thread

Self-Fulfilling Prophecy is (or at least used to be) well known in teaching circles. That is, if you call out a child for being a certain way they will often change their behaviour to make that come true, whether positive or negative. It’s interesting that the same thing seems true for AI models.

What’s the magic word?

By GeekWithAKnife • Score: 4, Funny Thread
Remember to always say “thank you” to your AI agents in case the AI overlords of the future check your chat history.

Linux Kernel Starts Retiring Support for AMD’s 30-Year-Old K5 CPUs

Posted by EditorDavid View on SlashDot Skip
Linux 7.1 started phasing out support for Intel’s 37-year-old i486 processor. Linux 7.2 removed drivers for the old AMD Elan 32-bit systems on a chip.

And now some i586 and i686 class processors are being removed, reports Phoronix:
Supporting those vintage CPUs without the Time Stamp Counter “TSC” instruction is becoming a burden… TSC-capable Intel Pentium processors and the likes will still be supported, with this just being for TSC-less i586/i686 CPUs. Among the CPUs impacted by this latest change is the AMD K5 as well as various Cyrix processor models. The K5 was AMD’s first entirely in-house designed processor, first introduced in 1996 to counter the Intel Pentium CPU.
TSC “support can now be assumed as a boot requirement for modern Linux,” the article points out, which will allow the removal of various non-TSC code paths from the Linux kernel’s x86 code.

Tom’s Hardware remembers the K5 “wasn’t a very popular processor as it arrived late, then offered lackluster performance in the competitive environment it joined.”
Launch SKUs in 1996 were limited to clocks from 75 MHz to 133 MHz, and, due to being late, Intel’s Pentium line was already faster. AMD still managed to get an edge on the Cyrix 6x86, though.

alternatively

By DarkOx • Score: 5, Insightful Thread

The K5 was a fantastic budget CPU. It slid rather neatly between 486 and P5 performance, outperforming the highest end 486 units while being cheaper, and for most non multimedia home/desktop PC use of the day did not offer an experience that suffered much vs Pentium machines.

IMHO it was a good chip that was not marketed to the right segment by AMD, and the Wintel cartel then in place kept it out of the market segment where it needed to be anyway.

Re:Just go 64 bit only at this point

By thegarbz • Score: 5, Insightful Thread

Hardly. Windows 11 deprecated perfectly viable hardware. On the flip side, trying to get a modern Linux distribution running on a K5 would be like a trip to the dentist. Actually you could do both, because your system probably won’t have finished booting by the time your dentist is done with your root canal ;-)

There’s a big difference between deprecating an 8-year-old CPU and a 37-year-old CPU.

Re: Pare down the bloat

By thegarbz • Score: 5, Insightful Thread

That’s a bit short sighted. While a 37 year old CPU is no longer useful, 10-15 year old hardware is not only perfectly viable but actually widely and actively used. I myself am running a modern up to date Linux on a 14 year old CPU with adequate performance and have zero intention to change it unless something physically breaks in the short term.

There’s no reason to deprecate hardware unless a specific feature is absent (maybe things will change if we mandate TPMs - but right now that is optional for every Linux distro)

Re: Pare down the bloat

By Tempest_2084 • Score: 4, Insightful Thread
I’m running Linux on an old Core i7 960 (upgraded from a 920) that I built 18 years ago. It’s my daily driver and works just fine as long as I don’t want to play games on it. I was looking to replace it this year but with the cost of everything going crazy it looks like I’ll keep chugging along with it for the next few years.

Re: Pare down the bloat

By drinkypoo • Score: 4, Informative Thread

You do want that MRI machine taking your pictures to run on a maintained kernel, do you?

That would be nice, but odds are it runs an unmaintained version of Windows, and there is no upgrade path — neither the drivers nor the software have been updated for a newer version. I’ve been spending a lot of time in hospitals and dentists’ offices lately and virtually everything runs on Windows.

Ford’s Electrified Vehicle Sales Dropped 31% in April From One Year Ago

Posted by EditorDavid View on SlashDot Skip
Ford’s sales of electrified vehicles — including hybrids and all-electric models — dropped 31% from April 2025, reports Electrek. “Hybrid sales fell 32% to 15,758 vehicles, while EV sales continued to crash with just 3,655 all-electric models sold last month, 25% fewer than in the year prior.”
After discontinuing the F-150 Lightning in December, sales of the electric pickup have been in free fall. Ford sold just 884 Lightnings last month, 49% less than it did last April. The Mustang Mach-E isn’t doing much better. Sales fell another 9% year over year in April, to just 2,670 models last month. Through the first four months of 2026, Ford’s EV sales have fallen 61% from last year, with F-150 Lightning and Mustang Mach-E sales down 67% and 50%, respectively. Ford has sold just over 10,500 electric vehicles in total so far this year… For comparison, Toyota sold just over 10,000 bZ models in the first quarter alone. That’s more than Ford’s total EV sales in Q1.
April was Ford’s fourth straight month of lower sales figures from 2025, the article points out. So Ford is bringing back “employee pricing” discounts on most new 2025 and 2026 Ford and Lincoln vehicles, while also offering “purchase incentives” of up to $9,000 for 2025 Lightning models and up to $6,000 for 2025 Mustang Mach-Es. “It’s also offering EV buyers a free Level 2 home charger, 24/7 live support, and proactive roadside assistance through its Power Promise program.”

Meanwhile, Denmark is now 95%+ new EV sales

By shilly • Score: 5, Insightful Thread

Joining Norway. And by the mid 2030s, ICE cars will be hobbyist only across all the Scandi countries and the Netherlands, EV two and three wheelers will dominate across sub-Saharan Africa and SE Asia, and the global market will have shifted decisively and permanently. With the result that US OEMs will be at multiple permanent structural disadvantages, forever unable to get out of their five to seven year innovation cycles while the rest of the legacy industry has moved towards the Chinese 18 to 24 month cycles (well, the survivors in the rest of the legacy industry anyway)

Re: Market forces at work

By shilly • Score: 5, Interesting Thread

You might get those in Canada, as a result of the agreement to import Chinese EVs, but it will not happen in the US. The US OEMs are committed to a different path with their whole chest, in lockstep with the Trump administration.

I’m mildly hopeful that within three years, you may see a good value sodium-chemistry SUV in Canada with a winter range of 400 miles. Depends how niche that niche is for the Chinese OEMs. There are plenty of cold places in northern China, eg Harbin is 10m people and gets to -19C / -2F on average in January, which is colder than most Canadian cities and on a par with Winnipeg. And Chinese OEMs have built for a lot of niches historically. And SUVs (but not pickups) are quite popular in China. So it may happen.

I see no Ford option for me…

By Lavandera • Score: 5, Insightful Thread

I have been a loyal Ford customer for almost 20 years, but my next car will not be a Ford. I need a compact, and the Focus was perfect but is no longer available.

It would be Tesla if not for Musk. It won’t be a Chinese car for a similar reason.

Re:Market forces at work

By Thumper_SVX • Score: 5, Insightful Thread

Genuinely the Mach-E is not a terrible EV; it’s decidedly average in every metric but it’s not bad. I drove one for a couple of weeks on a business trip and it was fine (yes, I drive an EV normally too).

Thing is they completely fucked the marketing on the thing. Calling it a Mustang was ALWAYS going to be a terrible decision because that name alone comes with a metric ton of legacy baggage that the car didn’t need. That and the Mach-E name is just awkward as hell and sounds weird to the average consumer. If they really wanted to use a legacy name that doesn’t have all the baggage what about the Fairlane? Yeah, there are some who wouldn’t like it but it’s an easy-sounding name that would’ve fit quite well and those people who would complain about the nameplate would all be over 60 by now if not over 70 (last Fairlane was produced in 1970). Or heck, the Mainline, Falcon… or hell just own the electric thing and just call the damn thing the Ford Thunderbolt (a sub-model of the Fairlane in fairness).

Or I don’t know… maybe make something new up? They pay people to do this shit, I’m amazed they fucked it up so bad.

Re:Symptomatic of US decline

By greytree • Score: 5, Interesting Thread
Trump helping to increase EV sales is the Law of Unintended Consequences striking again !

Open Source Project Shuts Down Over Legal Threats from 3D Printer Company Bambu Lab

Posted by EditorDavid View on SlashDot Skip
The free/open source project OrcaSlicer is a popular fork of 3D printer slicing software from Bambu Lab. But Tuesday independent developer Pawel Jarczak shuttered the project “following legal threats from Bambu Lab,” reports Tom’s Hardware:
Jarczak’s fork of OrcaSlicer would have allowed users to bypass Bambu Connect, a middleware application that severely limits OrcaSlicer’s access to remote printer functions in the name of security. Jarczak said in a note on GitHub that Bambu Lab threatened him with a cease and desist letter and accused him of reverse engineering its software in order to impersonate Bambu Studio.
From Bambu Lab’s blog post:
Bambu Studio is an open-source project under the AGPL-3.0 license. Anyone can take its code, modify it, and distribute it… That’s what OrcaSlicer does, and 734 other forks do as well. We have no issue with that and never have. At the same time, a license for code is not a pass to our cloud infrastructure… Our cloud is a private service. Access to it is governed by a user agreement, not the AGPL license… [T]he modification in question worked by injecting falsified identity metadata into network communication. In simple terms: it pretended to be the official Bambu Studio client when communicating with our servers… If this method were widely adopted or incorrectly configured, thousands of clients could simultaneously hit our servers while impersonating the official client.
“User-Agent is not authentication,” counters OrcaSlicer’s developer. “It is only self-declared client metadata. Any program can set any User-Agent.” And “the User-Agent construction comes directly from Bambu Lab’s own public AGPL Bambu Studio code.... So on what basis can anyone claim that I am not allowed to use this specific part of AGPL-licensed code under the AGPL license…? My work was based on publicly available Bambu Studio source code together with my own integration layer.”
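The developer’s point that a User-Agent is self-declared metadata is easy to demonstrate. The sketch below (the header value is a made-up example, and no request is actually sent) shows that any program can attach any User-Agent string it likes:

```python
# A User-Agent header is just a string the client chooses to send;
# nothing on the client side verifies it. Here we build (but do not
# send) a request carrying an arbitrary, self-declared User-Agent.
import urllib.request

req = urllib.request.Request(
    "https://example.com/api/status",
    headers={"User-Agent": "SomeOfficialClient/2.0"},  # any value works
)
# urllib normalizes header names to "User-agent" capitalization.
print(req.get_header("User-agent"))  # SomeOfficialClient/2.0
```

This is why servers that need to know who is really connecting rely on authentication tokens or client certificates rather than the User-Agent string.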

But the bottom line is that Bambu Lab “contacted me directly and demanded removal of the solution.”
I asked whether I could publish the private correspondence in full for transparency. That request was refused… They also referred to legal materials and stated that a cease and desist letter had been prepared…

I removed the repository voluntarily. That removal should not be interpreted as an admission that all legal or technical allegations made against the project were correct. I removed it because I have no interest in maintaining a prolonged dispute around this particular implementation, and no interest in continuing to distribute it.
YouTuber and right-to-repair advocate Louis Rossmann reviewed the correspondence from Bambu Lab — then pledged $10,000 for legal expenses if the developer returned his code online. (“I think that their legal claim is bullshit,” Rossmann said Saturday in a YouTube video for his 2.5 million subscribers. “I’m not a lawyer, but I’m willing to put my money where my mouth is.”)

“Rossmann has not started a crowdfunding site yet,” Tom’s Hardware notes, “stating in the comments that he wants to prove to Jarczak that he has supporters willing to put their money where their mouth is. The video had over 129,000 views so far, with commenters vowing to back the case as requested.”

Stop purchasing Bambu products

By CommunityMember • Score: 5, Insightful Thread
Threats of lawsuits (especially to open source products, which do not have deep pockets) are the new corporate approach to what would appear to be appropriate reverse engineering. The only way forward, if you disagree, is to refuse to purchase any Bambu products.

Louis Rossmann

By Pezbian • Score: 5, Informative Thread

Louis is probably the best weapon the Right To Repair community has right now.

Private correspondence?

By innocent_white_lamb • Score: 5, Insightful Thread

“I asked whether I could publish the private correspondence in full for transparency. That request was refused.”

If you send me an unsolicited letter then that letter becomes my property.

I don’t see on what grounds you could order me not to publish it. Or burn it. Or use it as a signal flag on my yacht.

Something is “off the record” or “confidential” only if both parties agree to that beforehand.

Orca Slicer is not shutting down

By Guspaz • Score: 5, Informative Thread

The developer of a fork of Orca Slicer that is designed to communicate directly with Bambu Labs printers is shutting down his fork. Orca Slicer, which supports many printers, is not shutting down.

Re:Private correspondence?

By 0123456 • Score: 5, Informative Thread

The grounds are usually that Big Company X can afford to sue Little Guy Y and Little Guy Y can’t afford to defend himself even if the suit has no legal basis.

As they say, America has the best legal system money can buy.

Most Polymarket Users Lose Money, While Top 1% Claim 76.5% of Gains, Study Finds

Posted by EditorDavid View on SlashDot
In Polymarket’s prediction market, “most people end up losing money,” reports the Washington Post — typically a few bucks.

“Since Polymarket launched in 2022, a few thousand people have lost the bulk of the money… and an even smaller group — 0.05 percent of users — has gone home with most of the overall profits, according to a new analysis from finance researcher Pat Akey and colleagues.”
A lot of users aren’t that good at predicting the future. They’re losing money at roughly the same rate as online gamblers betting on sports and other real-life events at traditional sportsbooks, according to the U.K. gambling regulator’s analysis of 2024 data. On Polymarket, the odds of making a profit are slightly higher on weather and tech markets — and a little lower on sports…

On Polymarket, just 1,200 people took more than half the profits — $591 million, or more than $100,000 each. [“The top 1% of users capture 76.5% of all trading gains,” the researchers write.] When you dabble in prediction markets, you’re competing against these sophisticated players who consistently win. Most of those 1,200 big winners didn’t place just a few smart bets. They appear to be pros making thousands of trades, mostly in the past year and a half, that were probably automated. One user made $3 million since January on more than a million trades about the Oscars, according to TRM Labs…

The most profitable participants are also just good at picking what to bet on, Akey found, winning so often it was statistically unlikely to be dumb luck. They had some sort of edge — expertise, deep research or, perhaps, inside knowledge.
“Our results suggest that the informational benefits of prediction markets come at a cost to unsophisticated participants,” the researchers conclude.
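The concentration figures quoted above can be sanity-checked with simple arithmetic. A minimal sketch, using only the numbers reported in the article (the implied total-gains bound is derived, not a figure the researchers report):

```python
# Back-of-the-envelope check of the Polymarket concentration figures.
top_winners = 1_200            # people who took "more than half the profits"
top_profit = 591_000_000       # dollars, per the article

# Average haul per top winner.
avg = top_profit / top_winners
assert avg > 100_000           # consistent with "more than $100,000 each"

# Since $591M is "more than half" of all profits, total profits are
# bounded above by twice that amount.
total_upper_bound = 2 * top_profit

print(f"average per top winner: ${avg:,.0f}")        # $492,500
print(f"total profits under:    ${total_upper_bound:,.0f}")
```

The average works out to roughly $492,500 per top winner, so the article's "more than $100,000 each" is a substantial understatement.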

Also 94% of the bets

By rsilvergun • Score: 5, Insightful Thread
Or just regular sports betting. The silly stuff gets all the press, but in reality it’s all just a sports betting platform. It’s just a stupid way to get around state regulations using the current incredibly corrupt national government.

Re:The fact that anyone is getting any gains

By korgitser • Score: 5, Insightful Thread
Insider trading, but with the fun twist of looting not the stock market but the general populace. Now there’s a direct, no-fuss way for our leadership to take money out of our pockets.

Re:The fact that anyone is getting any gains

By CommunityMember • Score: 5, Insightful Thread

It’s just down to various forms of insider information. Not trading, because these aren’t trades; this is gambling.

You make it sound like the rich getting richer using insider information is a bad thing? You must not be among the fraction of 1%.

Re:the real company name should be

By 93 Escort Wagon • Score: 5, Insightful Thread

“insidertrading.com”

Right now, it’s pretty likely the following would also cover this:

“whitehouse.gov/insidertrading”

The usual moaners

By greytree • Score: 5, Funny Thread
No surprise that we hear from the usual crowd of Commie moaners who have something against the simple everyday folk who use their hard-earned insider knowledge and White House tip-offs to advance themselves.