Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.
SaaS Apocalypse Could Be Open Source’s Greatest Opportunity
Longtime Slashdot reader internet-redstar writes:
Nearly a trillion dollars has been wiped from software stocks in 2026, with hedge funds making billions shorting Salesforce, HubSpot, and Atlassian. At FOSDEM 2026, cURL maintainer Daniel Stenberg shut down his bug bounty program after AI-generated slop overwhelmed his team. A new article on HackerNoon argues that most commercial SaaS could inevitably become open source, not out of ideology but economics. The author points to Proxmox replacing VMware at enterprise scale and startups like Holosign replicating DocuSign at $19/month flat as evidence. The catch, the article claims, is that maintainers who refuse to embrace AI tools risk being forked, or simply replicated from scratch, by those who do.
2026 Turing Award Goes To Inventors of Quantum Cryptography
Dave Knott shares a report from the New York Times:
On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, said Drs. Charles Bennett and Gilles Brassard had won this year’s Turing Award for their work on quantum cryptography and related technologies. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the two scientists will share.
[…] The two met in 1979 while swimming in the Atlantic just off the north shore of Puerto Rico. They were taking a break while attending an academic conference in San Juan. Dr. Bennett swam up to Dr. Brassard and suggested they use quantum mechanics to create a bank note that could never be forged. Collaborating between Montreal and New York, they applied Dr. Bennett’s idea to subway tokens rather than bank notes. In a research paper published in 1983, they showed that their quantum subway tokens could never be forged, even if someone managed to steal the subway turnstile housing the elaborate hardware needed to read them.
This led to quantum cryptography. After describing their new form of encryption in a research paper published in 1984, they demonstrated the technology with a physical experiment five years later. Called BB84, their system used photons — particles of light — to create encryption keys used to lock and unlock digital data. Thanks to the laws of quantum mechanics, the behavior of a photon changes if someone looks at it. This means that if anyone tries to steal the keys, he or she will leave a telltale sign of the attempted theft — a bit like breaking the seal on an aspirin bottle.
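The eavesdropper-detection property described above can be illustrated with a toy simulation of BB84’s bookkeeping (random bits, random bases, basis sifting). This is a hedged sketch of the protocol’s logic, not of the underlying physics; all function and variable names are illustrative:

```python
import secrets

def bb84_key(n_bits: int, eavesdrop: bool = False):
    """Toy BB84: Alice sends random bits in random bases, Bob measures in
    random bases, and they keep only positions where bases matched.
    An eavesdropper who measures every photon disturbs ~25% of kept bits."""
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]  # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]

    received = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            eve_basis = secrets.randbelow(2)
            if eve_basis != a_basis:
                bit = secrets.randbelow(2)  # wrong-basis measurement randomizes the bit
            a_basis = eve_basis             # photon is re-emitted in Eve's basis
        if b_basis == a_basis:
            received.append(bit)            # matching basis: Bob reads the bit faithfully
        else:
            received.append(secrets.randbelow(2))  # mismatched basis: random outcome

    # Public sifting: keep positions where Alice's and Bob's announced bases agreed
    alice_key = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
    bob_key   = [r for r, a, bb in zip(received,  alice_bases, bob_bases) if a == bb]
    errors = sum(a != b for a, b in zip(alice_key, bob_key)) / max(len(alice_key), 1)
    return alice_key, bob_key, errors
```

Without an eavesdropper, the sifted keys agree exactly; with one, roughly a quarter of the sifted bits disagree — the “broken seal” that alerts the two parties.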
Federal Cyber Experts Called Microsoft’s Cloud ‘a Pile of Shit’, Yet Approved It Anyway
ProPublica reports that federal cybersecurity reviewers had serious, yearslong concerns about Microsoft’s GCC High cloud offering, yet they approved it anyway because the product was already deeply embedded across government. As one member of the team put it: “The package is a pile of shit.” From the report:
In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings. The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica. For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security.
Such judgments would be damning for any company seeking to sell its wares to the U.S. government, but they should have been particularly devastating for Microsoft. The tech giant’s products had been at the heart of two major cybersecurity attacks against the U.S. in three years. In one, Russian hackers exploited a weakness to steal sensitive data from a number of federal agencies, including the National Nuclear Security Administration. In the other, Chinese hackers infiltrated the email accounts of a Cabinet member and other senior government officials. The federal government could be further exposed if it couldn’t verify the cybersecurity of Microsoft’s Government Community Cloud High, a suite of cloud-based services intended to safeguard some of the nation’s most sensitive information.
Yet, in a highly unusual move that still reverberates across Washington, the Federal Risk and Authorization Management Program, or FedRAMP, authorized the product anyway, bestowing what amounts to the federal government’s cybersecurity seal of approval. FedRAMP’s ruling — which included a kind of “buyer beware” notice to any federal agency considering GCC High — helped Microsoft expand a government business empire worth billions of dollars. “BOOM SHAKA LAKA,” Richard Wakeman, one of the company’s chief security architects, boasted in an online forum, celebrating the milestone with a meme of Leonardo DiCaprio in “The Wolf of Wall Street.”
It was not the type of outcome that federal policymakers envisioned a decade and a half ago when they embraced the cloud revolution and created FedRAMP to help safeguard the government’s cybersecurity. The program’s layers of review, which included an assessment by outside experts, were supposed to ensure that service providers like Microsoft could be entrusted with the government’s secrets. But ProPublica’s investigation — drawn from internal FedRAMP memos, logs, emails, meeting minutes, and interviews with seven former and current government employees and contractors — found breakdowns at every juncture of that process. It also found a remarkable deference to Microsoft, even as the company’s products and practices were central to two of the most damaging cyberattacks ever carried out against the government.
Apple Can Delist Apps ‘With Or Without Cause,’ Judge Says In Loss For Musi App
An anonymous reader quotes a report from Ars Technica:
Musi, a free music streaming app that had tens of millions of iPhone downloads and garnered plenty of controversy over its method of acquiring music, has lost an attempt to get back on Apple’s App Store. A federal judge dismissed Musi’s lawsuit against Apple with prejudice and sanctioned Musi’s lawyers for “mak[ing] up facts to fill the perceived gaps in Musi’s case.”
Musi built a streaming service without striking its own deals with copyright holders. It did so by playing music from YouTube, writing in its 2024 lawsuit against Apple that “the Musi app plays or displays content based on the user’s own interactions with YouTube and enhances the user experience via Musi’s proprietary technology.” Musi’s app displayed its own ads but let users remove them for a one-time fee of $5.99. Musi claimed it complied with YouTube’s terms, but Apple removed it from the App Store in September 2024. Musi does not offer an Android app. Musi alleged that Apple delisted its app based on “unsubstantiated” intellectual property claims from YouTube and that Apple violated its own Developer Program License Agreement (DPLA) by delisting the app.
Musi was handed a resounding defeat yesterday in two rulings from US District Judge Eumi Lee in the Northern District of California. Lee found that Apple can remove apps “with or without cause,” as stipulated in the developer agreement. Lee wrote (PDF): “The plain language of the DPLA governs because it is clear and explicit: Apple may ‘cease marketing, offering, and allowing download by end-users of the [Musi app] at any time, with or without cause, by providing notice of termination.’ Based on this language, Apple had the right to cease offering the Musi app without cause if Apple provided notice to Musi. The complaint alleges, and Musi does not dispute, that Apple gave Musi the required notice. Therefore, Apple’s decision to remove the Musi app from the App Store did not breach the DPLA.”
Experiments Show Potatoes Can Survive In Lunar Soil (With Lots of Help)
sciencehabit shares a report from Science.org:
In The Martian, fictional astronaut Mark Watney survives the wasteland of Mars by growing potatoes in Martian soil — with a bit of help from human poop. The idea may not be so far-fetched. In a preprint posted this month on bioRxiv, researchers show potatoes can indeed grow in the equivalent of Moon dust, though they need a lot of help from compost found on Earth. To make the discovery, scientists first had to re-create lunar regolith — the loose, powdery layer that blankets the Moon’s surface. To replicate that in the lab, David Handy, a space biologist at Oregon State University (OSU), and his colleagues used a mix of crushed minerals and volcanic ash that matched the chemistry of the Moon.
But lunar regolith is entirely devoid of the organic matter that plants need to grow. “Turning an inorganic, inhospitable bucket of glorified sand into something that can support plant growth is complex,” says Anna-Lisa Paul, a plant molecular biologist at the University of Florida not involved with the work. So Handy and his colleagues added vermicompost — organic waste from worms — into the regolith. They found that a mix with 5% compost allowed the potatoes to grow while still emulating the stressful conditions of the lunar environment. After almost 2 months of growth, the team harvested the tubers, freeze-dried them, and ground them up for further testing.
Analysis of the potatoes’ DNA showed stress-related genes had been activated. The potatoes also had higher concentrations of copper and zinc than Earth-grown ones, which may make them dangerous for human consumption. The plants’ nutritional value, though, was similar to traditional potatoes — a surprise to the scientists, who expected lower levels of nutrition “because the plants might have been working overtime to overcome certain stressors,” Handy says.
Nvidia Announces Vera Rubin Space-1 Chip System For Orbital AI Data Centers
Nvidia unveiled its Vera Rubin Space-1 system for powering AI workloads in orbital data centers. “Space computing, the final frontier, has arrived,” said CEO Jensen Huang. “As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated.” CNBC reports:
In a press release, the company said that its Vera Rubin Space-1 Module, which includes the IGX Thor and Jetson Orin, will be used on space missions led by multiple companies. The chips are specifically “engineered for size-, weight- and power-constrained environments.” Partners include Axiom Space, Starcloud and Planet.
Huang said Nvidia is working with partners on a new computer for orbital data centers, but there are still engineering hurdles to overcome. “In space, there’s no convection, there’s just radiation,” Huang said during his GTC keynote, “and so we have to figure out how to cool these systems out in space, but we’ve got lots of great engineers working on it.”
AI Job Loss Research Ignores How AI Is Utterly Destroying the Internet
An anonymous reader quotes a report from 404 Media, written by Jason Koebler:
Over the last few months, various academics and AI companies have attempted to predict how artificial intelligence is going to impact the labor market. These studies, including a high-profile paper published by Anthropic earlier this month, largely try to take the things AI is good at, or could be good at, and match them to existing job categories and job tasks. But the papers ignore some of the most impactful and most common uses of AI today: AI porn and AI slop.
Anthropic’s paper, called “Labor market impacts of AI: A new measure and early evidence,” essentially attempts to find 1:1 correlations between tasks that people do today at their jobs and things people are using Claude for. The researchers also try to predict if a job’s tasks “are theoretically possible with AI,” which resulted in a chart that has gone somewhat viral and was included in a newsletter by MSNOW’s Phillip Bump and threaded about by tech journalist Christopher Mims. (Because everything is terrible, the research is now also feeding into a gambling website where you can see the apparent odds of having your job replaced by AI.) In his thread, Mims makes the case that the “theoretical capability” of AI to do different jobs in different sectors is totally made up, and that this chart basically means nothing. Mims makes a good and fair observation: The many, many studies that attempt to predict which people are going to lose their jobs to AI are all flawed because the inputs must be guessed, to some degree.
But I believe most of these studies are flawed in a deeper way: They do not take into account how people are actually using AI, though Anthropic claims that that is exactly what it is doing. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily,” the researchers write. This is based in part on the “Anthropic Economic Index,” which was introduced in an extremely long paper published in January that tries to catalog all the high-minded uses of AI in specific work-related contexts. These uses include “Complete humanities and social science academic assignments across multiple disciplines,” “Draft and revise professional workplace correspondence and business communications,” and “Build, debug, and customize web applications and websites.” Not included in any of Anthropic’s research are extremely popular uses of AI such as “create AI porn” and “create AI slop and spam.” These uses are destroying discoverability on the internet and causing cascading societal and economic harms.
“Anthropic’s research continues a time-honored tradition by AI companies who want to highlight the ‘good’ uses of AI that show up in their marketing materials while ignoring the world-destroying applications that people actually use it for,” argues Koebler. “Meanwhile, as we have repeatedly shown, huge parts of social media websites and Google search results have been overtaken by AI slop. Chatbots themselves have killed traffic to lots of websites that were once able to rely on ad revenue to employ people, so on and so forth…”
“This is all to say that these studies about the economic impacts of AI are ignoring a hugely important piece of context: AI is eating and breaking the internet and social media,” writes Koebler, in closing. “We are moving from a many-to-many publishing environment that created untold millions of jobs and businesses towards a system where AI tools can easily overwhelm human-created websites, businesses, art, writing, videos, and human activity on the internet. What’s happening may be too chaotic, messy, and unpleasant for AI companies to want to reckon with, but to ignore it entirely is malpractice.”
Arizona Charges Kalshi With Illegal Gambling Operation
Arizona has filed criminal charges against Kalshi, accusing it of operating an illegal gambling business. “Kalshi may brand itself as a ‘prediction market,’ but what it’s actually doing is running an illegal gambling operation and taking bets on Arizona elections, both of which violate Arizona law,” Arizona Attorney General Kris Mayes said in a statement. The case could ultimately head to the Supreme Court to decide whether federal oversight by the Commodity Futures Trading Commission overrides state gambling laws. Bloomberg reports:
While state regulators have taken steps to crack down on what they say is unlicensed betting on Kalshi’s site, Arizona appears to be the first state to escalate to criminal charges. The charges cited in the complaint are misdemeanors, which carry less serious penalties than felonies. […] Prediction market exchanges like Kalshi have said they should continue to be regulated by the US Commodity Futures Trading Commission despite opposition from some state officials, who argue the trading should come under state gambling laws.
Arizona’s criminal complaint follows Kalshi’s move last week to block the state’s gaming department from taking enforcement action against the company. “These are the first criminal charges of any kind filed against Kalshi in any court in the United States, but it will likely be the first of several,” said Daniel Wallach, a sports and gaming attorney.
Rural Ohioans Seek To Ban Data Centers Through Constitutional Amendment
Residents in rural Ohio are pushing a constitutional amendment to ban large data centers over 25 megawatts, citing concerns about energy use, water consumption, and lack of transparency around proposed projects. “My biggest concern is because I love Adams County,” Nikki Gerber told Cleveland.com. “What it feels like they are doing is just taking advantage of the unzoned rural areas of Ohio, where they can go ahead and put in whatever they want.” From the report:
Gerber and a handful of residents from Adams and Brown counties gathered about 1,800 signatures in eight days to start the ballot process. They submitted those petitions to the Ohio attorney general’s office on Monday. That’s the first step before supporters can begin collecting signatures statewide.
State law requires at least 1,000 valid voter signatures to begin the process. The petitions must also include the full text of the proposed amendment and a summary explaining what it would do. Attorney General Dave Yost’s office now has 10 days to decide whether the summary fairly and truthfully describes the proposal. If it does, the measure will move to the Ohio Ballot Board. Supporters would then need to gather about 413,000 valid signatures by July to place the amendment before voters this November.
The report notes that a 25-megawatt limit “would effectively block most modern data centers from being built in Ohio.”
Gamers React With Overwhelming Disgust To DLSS 5’s Generative AI Glow-Ups
Kyle Orland writes via Ars Technica:
Since deep-learning super-sampling (DLSS) launched on 2018’s RTX 2080 cards, gamers have been generally bullish on the technology as a way to effectively use machine-learning upscaling techniques to increase resolutions or juice frame rates in games. With yesterday’s tease of the upcoming DLSS 5, though, Nvidia has crossed a line from mere upscaling into complete lighting and texture overhauls influenced by “generative AI.” The result is a bland, uncanny gloss that has received an instant and overwhelmingly negative reaction from large swaths of gamers and the industry at large.
While previous DLSS releases rendered upscaled frames or created entirely new ones to smooth out gaps, Nvidia calls DLSS 5 — which it plans to launch in Autumn — “a real-time neural rendering model” that can “deliver a new level of photoreal computer graphics previously only achieved in Hollywood visual effects.” Nvidia CEO Jensen Huang said explicitly that the technology melds “generative AI” with “handcrafted rendering” for “a dramatic leap in visual realism while preserving the control artists need for creative expression.”
Unlike existing generative video models, which Nvidia notes are “difficult to precisely control and often lack predictability,” DLSS 5 uses a game’s internal color and motion vectors “to infuse the scene with photoreal lighting and materials that are anchored to source 3D content and consistent from frame to frame.” That underlying game data helps the system “understand complex scene semantics such as characters, hair, fabric and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast,” the company says.
Nvidia has published an announcement video, and Digital Foundry has released a detailed technical breakdown of the technology.
“Reactions have compared the effect to air-brushed pornography, ‘yassified, looks-maxed freaks,’ or those uncanny, unavoidable Evony ads,” writes Orland. “Others have noted how DLSS 5 seems to mangle the intended art direction by dampening shadows in favor of a homogenized look.”
Thomas Was Alone developer Mike Bithell said the technology seems designed “for when you absolutely, positively, don’t want any art direction in your gaming experience.”
Gunfire Games Senior Concept Artist Jeff Talbot added that “in every shot the art direction was taken away for the senseless addition of ‘details.’ Each DLSS 5 shot looked worse and had less character than the original. This is just a garbage AI Filter.”
DLSS 5’s “AI dogshit is actually depressing,” said New Blood Interactive founder and CEO Dave Oshry, adding that future generations “won’t even know this looks ‘bad’ or ‘wrong’ because to them it’ll be normal.”
Finance Bros To Tech Bros: Don’t Mess With My Bloomberg Terminal
An anonymous reader quotes a report from the Wall Street Journal:
A battle of insults and threats has broken out between the tech world and Wall Street. What’s got everyone so worked up? The same thing that starts most fights: business software. A series of social-media posts went viral in recent days with claims that AI has created a worthy — and way cheaper — alternative to the Bloomberg terminal, a computer system that is like oxygen to professional investors. Now “Bloomberg is cooked,” some posters argued as they heralded the arrival of a newly released AI tool from startup Perplexity. […]
The finance bros who worship at the altar of Bloomberg have declared war on the tech evangelists who have put all their faith in AI. To suggest that the terminal is replaceable is “laughable,” said Jason Lemire, who jumped into the conversation on LinkedIn. (Ironically or not, his post also included an AI-generated image of churchgoers praying to the Bloomberg terminal.) “It seems quite obvious to me that those propagating that post are either just looking for easy engagement and/or have never worked in a serious financial institution,” he wrote. […] Morgan Linton, the co-founder and CTO of AI startup Bold Metrics and an avid Perplexity Computer user, said it’s rare for a single AI prompt to generate anything close to what Bloomberg does. That said, he added that tools like this can lay “a really good foundation for a financial application. And that really has not been possible before.”
Others aren’t so sure. Michael Terry, an institutional investment manager who used the terminal for more than 30 years, said he used a prompt circulating online to try to vibe code a Bloomberg replica on Anthropic’s Claude. “It was laughable at best, horrific at worst,” he said. Perplexity Chief Business Officer Dmitry Shevelenko acknowledged there are some aspects of the terminal that can’t be replicated with vibe coding, including some of Bloomberg’s proprietary data inputs. The live chat network, which includes 350,000 financial professionals in 184 countries, would also be hard to re-create, as would the terminal’s data security, reliability, and robust support system. “I love Bloomberg. And I know most people that use Bloomberg are very, very loyal and extremely happy,” said Lemire. His message to the techies? “There’s nothing that you can vibe code in a weekend or even like over the course of a year that’s going to come anywhere close.”
Samsung Ends $2,899 Galaxy Z TriFold Sales After Just Three Months
Samsung is reportedly ending sales of the Galaxy Z TriFold just months after launch, likely due to “high production costs” and limited supply. 9to5Google reports:
The Galaxy Z TriFold launched in South Korea barely four months ago, arriving in Samsung’s home market ahead of a larger debut in the U.S. and other markets in January. The $2,899 smartphone brought an entirely new form factor to the foldable market, but it’s apparently very short-lived.
Korean media reports (via SamMobile) that Samsung is planning to end sales of the Galaxy Z TriFold in Korea, with one more restock coming in the country this week. In the United States, the report mentions that the TriFold will be available until “the current production volume is sold out,” which sounds like we might only get another restock or two here as well.
Nvidia Expects To Sell ‘At Least’ $1 Trillion In AI Chips By 2028
An anonymous reader quotes a report from TechCrunch:
Nvidia CEO Jensen Huang threw out a lot of numbers — mostly of the technical variety — during his keynote Monday to kick off the company’s annual GTC Conference in San Jose, California. But there was one financial figure that investors surely took notice of: his projection that there will be $1 trillion worth of orders for Nvidia’s Blackwell and Vera Rubin chips, a monetary reflection of a booming AI business.
About an hour into his keynote, Huang noted that last year Nvidia saw about $500 billion in demand for its Blackwell and upcoming Rubin chips through 2026. “Now, I don’t know if you guys feel the same way, but $500 billion is an enormous amount of revenue,” he said. “Well, I’m here to tell you that right now where I stand — a few short months after GTC DC, one year after last GTC — right here where I stand, I see through 2027, at least $1 trillion.”
Are Split Spacebars the Next Big Gaming Keyboard Trend?
“There are countless upgrades you could make to your gaming setup,” writes PC Gamer’s Jacob Ridley. “A wireless this, a bigger that, a faster thing. But how do you know what’s going to be a genuine upgrade worth investing in? Personally, I think it might be split spacebars.” His argument centers on the fact that spacebars take up a “greedy” amount of keyboard space — space that could instead be divided into multiple keys for different actions, such as voice chat or melee attacks. From the report:
While it’s often very easy to reprogram your spacebar to do a different action via your keyboard’s software, it’s a lot harder to reprogram your brain to hit any other key when you try to jump in game. Spacebar makes you jump. Everyone knows that; it’s practically etched onto your brain if you’re a long-time mouse and keyboard player. So, why does a split spacebar help with that? It comes down to this: once you know which side of a spacebar you tend to thwack with your thumb, you can program the other side to do whatever you want. I hit the right side of my spacebar every time when I’m typing. Therefore, when I started using a Wooting 60HE v2 with a split spacebar, I set the left side to be the delete key, since the keyboard lacks a dedicated delete key at its 60% size.
Though for gaming, the split spacebar offers much more varied purpose. People do strange things with the WASD keys that I won’t litigate here, but I’m pretty sure most gamers use their left thumb to strike the spacebar for gaming. Right? Right. If you fall into this category, you have the option of using the right-side spacebar for things like a chunky melee key, or, my personal favorite, an in-game voice chat key.
US SEC Preparing To Scrap Quarterly Reporting Requirement
The U.S. SEC is reportedly preparing a proposal to make quarterly earnings reports optional, potentially allowing companies to report results just twice a year. “The proposal could be published as soon as next month,” reports Reuters, citing a paywalled report from the Wall Street Journal, adding that “regulators are in talks with major exchanges to discuss how their rules may need to be adjusted.” Reuters reports:
The SEC will vote on the proposal once it is published, after a public comment period which typically lasts at least 30 days, the report said. The WSJ report added that the rule is expected to make quarterly reporting optional and not eliminate it altogether. The proposed change in the reporting standard would allow listed companies to publish results every six months instead of the current mandate to report figures every 90 days.
Trump, who first floated the idea in his first term as president, has argued the change in requirements would discourage shortsightedness from public companies while cutting costs. Skeptics, however, caution delaying disclosures could reduce transparency and heighten market volatility.