Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.
College Student, Cat Meme Helped Crack Massive Botnet Case
The Wall Street Journal shares the "wild behind-the-scenes story" of how the world’s largest and most destructive botnet was uncovered and taken down, writes Slashdot reader sturgeon. “At times, the network known as Kimwolf included more than a million compromised home Android devices and digital photo frames — enough DDoS firepower to disrupt internet traffic across the U.S. and beyond.” From the report:
Sitting in his dorm room at the Rochester Institute of Technology, Benjamin Brundage was closing in on a mystery that had even seasoned internet investigators baffled. A cat meme helped him crack the case. A growing network of hacked devices was launching the biggest cyberattacks ever seen on the internet. It had become the most powerful cyberweapon ever assembled, large enough to knock a state or even a small country offline. Investigators didn’t know exactly who had built it — or how. Brundage had been following the attacks, too — and, in between classes, was conducting his own investigation. In September, the college senior started messaging online with an anonymous user who seemed to have insider knowledge.
As they chatted on Discord, a platform favored by videogamers, Brundage was eager to get more information, but he didn’t want to come off as too serious and shut down the conversation. So every now and then he’d send a funny GIF to lighten the mood. Brundage was fluent in the memes, jokes and technical jargon popular with young gamers and hackers who are extremely online. “It was a bit of just asking over and over again and then like being a bit unserious,” said Brundage. At one point, he asked for some technical details. He followed up with the cat meme: a six-second clip that showed a hand adjusting a necktie on a fluffy gray cat. Brundage didn’t expect it to work, but he got the information. “It took me by surprise,” he said.
Eventually the leaker hinted there was a new vulnerability on the internet. Brundage, who is 22, would learn it threatened tens of millions of consumers and as much as a quarter of the world’s corporations. As he unraveled the mystery, he impressed veteran researchers with his findings — including federal law enforcement, which took action against the network two weeks ago. Chad Seaman, a researcher at Akamai, joked at one point that the internet could go down if Brundage spent too much time on his exams.
Penalties Stack Up As AI Spreads Through the Legal System
Tony Isaac shares a report from NPR:
When it comes to using AI, it seems some lawyers just can’t help themselves. Last year saw a rapid increase in court sanctions against attorneys for filing briefs containing errors generated by artificial intelligence tools. The most prominent case was that of the lawyers for MyPillow CEO Mike Lindell, who were fined $3,000 each for filing briefs containing fictitious, AI-generated citations. But as a cautionary tale, it doesn’t seem to have had much effect. The numbers started taking off last year, and the rate is still increasing. By one count, there have been more than 1,200 such incidents to date, about 800 of them in U.S. courts.
“I am surprised that people are still doing this when it’s been in the news,” says Carla Wale, associate dean of information & technology and director of the law library at the University of Washington School of Law. “Whatever the generative AI tool gives you — as in, ‘Look at these cases’ — you, under the rules of professional conduct, you have to read those cases. You have to read the cases to make sure what you are citing is accurate.”
“I think that lawyers who understand how to effectively and ethically use generative AI replace lawyers who don’t,” she says. “That’s what I think the future is.”
Half of Planned US Data Center Builds Have Been Delayed or Canceled
Despite hundreds of billions of dollars in investment, nearly half of planned U.S. data center projects are being delayed or canceled. “One major reason behind these setbacks is the availability of key electrical components — such as transformers, switchgear, and batteries — that are used both at data center sites and outside of them,” reports Tom’s Hardware. “Meanwhile, grid infrastructure is also stressed by electric vehicles and electrified heating systems.” Tom’s Hardware reports:
Approximately 12 gigawatts (12 GW) of data center capacity is expected to come online in the U.S. in 2026, according to data from market intelligence firm Sightline Climate cited by Bloomberg. Yet only about one-third of that capacity is currently under active construction because of various constraints.
Electrical infrastructure represents less than 10% of total data center cost, but it is as vital as compute hardware. A delay in any single element of the power chain can halt the entire project, which makes transformers, switchgear, and similar devices critical items despite their relatively small share of CapEx. Due to high demand, lead times for high-power transformers have expanded dramatically in the U.S.: delivery typically took 24 to 30 months before 2020, but waiting periods can stretch to as long as five years today, according to Sightline Climate cited by Bloomberg. For AI data centers, this is a catastrophe as their deployment cycles are under 18 months.
To address shortages, companies are turning to global markets. As a result, Canada, Mexico, and South Korea became the biggest suppliers of high-power transformers for AI data centers. At the same time, imports of high-power transformers from China surged from fewer than 1,500 units in 2022 to more than 8,000 units in the first ten months of 2025, according to Wood Mackenzie data cited by Bloomberg. The volatility of exports from China does not end with transformers, as the PRC accounts for over 40% of U.S. battery imports, while its share in certain transformer and switchgear categories remains near 30%, according to Bloomberg.
Perplexity’s ‘Incognito Mode’ Is a ‘Sham,’ Lawsuit Says
An anonymous reader quotes a report from Ars Technica:
Perplexity’s AI search engine encourages users to go deeper with their prompts by engaging in chat sessions that a lawsuit has alleged are often shared in their entirety with Google and Meta without users’ knowledge or consent. “This happened to every user regardless of whether or not they signed up for a Perplexity account,” the lawsuit alleged, while stressing that “enormous volumes of sensitive information from both subscribed and non-subscribed users” are shared.
According to an analysis with browser developer tools cited in the lawsuit, opening prompts are always shared, as are any follow-up questions the search engine asks that a user clicks on. Privacy concerns are seemingly worse for non-subscribed users, the complaint alleged. Their initial prompts are shared with “a URL through which the entire conversation may be accessed by third parties like Meta and Google.” Disturbingly, the lawsuit alleged, chats are also shared with personally identifiable information (PII), even when users who want to stay anonymous opt to use Perplexity’s “Incognito Mode.” That mode, the lawsuit charged, is a “sham.”
"‘Incognito’ mode does nothing to protect users from having their conversations shared with Meta and Google,” the complaint said. “Even paid users who turned on the ‘Incognito’ feature still had their conversations shared with Meta and Google, along with their email addresses and other identifiers that allowed Meta and Google to personally identify them.”
“Perplexity’s failure to inform its users that their personal information has been disclosed to Meta and Google or to take any steps to halt the continued disclosure of users’ information is malicious, oppressive, and in reckless disregard” of users’ rights, the lawsuit alleged.
“Nothing on Perplexity’s website warns users that their conversations with its AI Machine will be shared with Meta and Google,” Doe alleged. “Much less does Perplexity warn subscribed users that its ‘Incognito Mode’ does not function to protect users’ private conversations from disclosure to companies like Meta and Google.”
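The kind of check the complaint describes boils down to watching a page’s outbound network traffic for third-party requests that carry prompt text. Below is a minimal, hypothetical sketch of such a check using Playwright’s request hooks; the tracker domains, sample prompt, and target URL are placeholders for illustration, not findings from the lawsuit.

```python
# Hypothetical sketch: log outbound requests to third-party domains and flag
# any that appear to carry the user's prompt text. Domains, prompt, and URL
# below are placeholders, not findings from the complaint.
from urllib.parse import urlparse

from playwright.sync_api import sync_playwright

TRACKER_DOMAINS = {"facebook.com", "doubleclick.net", "google-analytics.com"}
PROMPT = "example medical question"  # stand-in for a sensitive prompt

def flag_request(request):
    host = urlparse(request.url).hostname or ""
    if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
        body = request.post_data or ""
        leaked = PROMPT.lower() in (request.url + body).lower()
        print(f"third-party request to {host} | prompt text present: {leaked}")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("request", flag_request)   # fires for every outbound request
    page.goto("https://example.com")   # placeholder URL, not Perplexity's site
    page.wait_for_timeout(5000)        # give analytics scripts time to fire
    browser.close()
```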
Python Blood Could Hold the Secret To Healthy Weight Loss
Longtime Slashdot reader fahrbot-bot writes:
CU Boulder researchers are reporting that they have discovered an appetite-suppressing compound in python blood that helps the snakes consume enormous meals and go months without eating yet remain metabolically healthy. The findings were published in the journal Nature Metabolism on March 19, 2026.
Pythons can grow as big as a telephone pole, swallow an antelope whole, and go months or even years without eating — all while maintaining a healthy heart and plenty of muscle mass. In the hours after they eat, research has shown, their heart expands 25% and their metabolism speeds up 4,000-fold to help them digest their meal. The team measured blood samples from ball pythons and Burmese pythons, fed once every 28 days, immediately after they ate a meal. In all, they found 208 metabolites that increased significantly after the pythons ate. One molecule, called para-tyramine-O-sulfate (pTOS), soared 1,000-fold.
Further studies, done with Baylor University researchers, showed that when they gave high doses of pTOS to obese or lean mice, it acted on the hypothalamus, the appetite center of the brain, prompting weight loss without causing gastrointestinal problems, muscle loss or declines in energy. The study found that pTOS, which is produced by the snake’s gut bacteria, is not present in mice naturally. It is present in human urine at low levels and does increase somewhat after a meal. But because most research is done in mice or rats, pTOS has been overlooked.
“We’ve basically discovered an appetite suppressant that works in mice without some of the side-effects that GLP-1 drugs have,” said senior author Leslie Leinwand, a distinguished professor of Molecular, Cellular and Developmental Biology who has been studying pythons in her lab for two decades. Drugs like Ozempic and Wegovy act on the hormone glucagon-like peptide-1 (GLP-1).
Renewables Reached Nearly 50% of Global Electricity Capacity Last Year
Renewables made up nearly half of global installed electricity capacity by the end of 2025, “accounting for 85.6% of global capacity expansion,” reports the Register, citing the International Renewable Energy Agency’s (IRENA) 2026 Renewable Capacity Statistics report. “Per IRENA’s data, that aforementioned 85.6 percent share of new power capacity additions was actually a decrease from 2024, when renewables were about 92 percent of global capacity additions. Yes, the share of total installed power capacity in 2025 rose again, but non-renewable capacity additions also rebounded sharply last year.” From the report:
Solar, in turn, was the dominant renewable technology, accounting for nearly three-quarters of last year’s renewable capacity additions. Those additions totaled 692 GW in 2025, lifting installed renewable capacity by a record 15.5 percent year over year, IRENA noted. By the end of last year, renewables accounted for 49.4 percent of global installed electricity capacity, while variable renewable sources such as solar and wind represented roughly 35 percent of total capacity. For reference, it was only in 2023 that renewable energy sources crossed the threshold of generating 30 percent of the world’s electricity.
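As a rough cross-check, the quoted figures hang together: if 692 GW of additions amounted to a 15.5 percent year-over-year increase, renewables ended 2025 at roughly 5,150 GW installed, which at a 49.4 percent share implies a global installed base of about 10,400 GW. A back-of-the-envelope sketch, using only the numbers above and ignoring retirements:

```python
# Back-of-the-envelope check using only the figures quoted above. Ignores
# capacity retirements, so treat the outputs as rough implied totals,
# not IRENA data.
additions_gw = 692        # renewable capacity added in 2025
yoy_growth = 0.155        # 15.5% year-over-year increase in renewable capacity
renewable_share = 0.494   # renewables' share of all installed capacity, end of 2025

end_2024_renewables = additions_gw / yoy_growth              # ~4,465 GW
end_2025_renewables = end_2024_renewables + additions_gw     # ~5,157 GW
implied_global_total = end_2025_renewables / renewable_share # ~10,440 GW

print(f"Implied renewable capacity, end of 2025: {end_2025_renewables:,.0f} GW")
print(f"Implied total installed capacity:        {implied_global_total:,.0f} GW")
```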
EPA Flags Microplastics, Pharmaceuticals As Contaminants In Drinking Water
An anonymous reader quotes a report from NPR:
Responding to public health concerns about microplastics and pharmaceuticals in the nation’s drinking water, the Trump administration for the first time has placed them on a draft list of contaminants maintained by the Environmental Protection Agency. The EPA announced the move Thursday, touting it as a “historic step” for the Make America Healthy Again, or MAHA, movement, which often raises concerns about toxic chemicals and plastic pollution in our food and environment. Also Thursday, the Department of Health and Human Services announced a $144 million initiative, called STOMP, to develop tools to measure and monitor microplastics in drinking water and, in a later stage, to remove them.
The Safe Drinking Water Act requires the EPA to publish an updated version of its Contaminant Candidate List every five years. This is the sixth iteration of the list. Microplastics and pharmaceuticals appear in the draft of the upcoming list, alongside per- and polyfluoroalkyl substances, or PFAS, and dozens of other chemicals and microbes. Their inclusion on the list gives local regulators a tool to evaluate risks in their water supply, the EPA says, and it can set the stage for more research and regulatory action — but doesn’t actually guarantee that will happen.
Mount Everest Climbers ‘Poisoned’ By Guides In Insurance Fraud Scheme
schwit1 shares a report from the Kathmandu Post:
In Nepal, helicopter rescue at high altitude is, by any measure, a genuine lifesaving operation. At high altitude, where oxygen thins and weather changes without warning, the ability to airlift a stricken trekker to Kathmandu within hours has saved countless lives. But threaded through that legitimate system, exploiting its urgency, its opacity, and its distance from oversight, is one of the most sophisticated insurance fraud networks in the world. Nepal’s fake rescue scam is not new. The Kathmandu Post first exposed it in 2018. Months later, the government convened a fact-finding committee, produced a 700-page report, and announced reforms. In February 2019, The Kathmandu Post published a long investigative report. Last year, Nepal Police’s Central Investigation Bureau reopened the file, and what it found is that the fraud did not stop: it has kept growing.
The mechanics of the fake rescue racket are straightforward: stage a medical emergency, call in a helicopter, check a tourist into a hospital, and file an insurance claim that bears little resemblance to what actually happened. But the sophistication lies in how each link in the chain is compensated, and how difficult it is for a foreign insurer — operating from Australia and the United Kingdom — to verify events that occurred at 3,000 metres in a remote Himalayan valley. The CIB investigation identifies two primary methods for manufacturing an “emergency.” The first involves tourists who simply don’t want to walk back. After completing a demanding trek — an Everest Base Camp trek, for instance, can take up to two weeks on foot — guides offer an alternative: pretend to be sick, and a helicopter will come. The guide handles the rest. The second method is more troubling. At altitudes above 3,000 meters, mild symptoms of altitude sickness are common. Blood oxygen saturation can drop, hands and feet tingle, headaches develop. In most cases, rest, hydration or a gradual descent is all that is needed. But guides and hotel staff, according to the CIB investigation, have been trained to terrify trekkers at precisely this moment. They tell them they are at risk of dying, that only immediate evacuation will save them. In some cases, investigators found that Diamox (Acetazolamide) tablets, used to prevent altitude sickness, were administered alongside excessive water intake to induce the very symptoms that would justify a rescue call.
In at least one case cited in the investigation, baking powder was mixed into food to make tourists physically unwell. Once a “rescue” is called, the financial choreography begins. A single helicopter carries multiple passengers. But separate, full-price invoices are submitted to each passenger’s insurance company, as if each had their own dedicated flight. A $4,000 charter becomes a $12,000 claim. Fake flight manifests and load sheets are fabricated. At the hospital, medical officers prepare discharge summaries using the digital signatures of senior doctors who were never involved in the case. In some cases, these are done without those doctors’ knowledge. Fake admission records are created for tourists who were, in some documented instances, drinking beer in the hospital cafeteria at the time they were supposedly receiving treatment. In one case, an office assistant at Shreedhi Hospital admitted that he had provided his own X-ray report, taken about a year earlier at a different hospital, to be passed off as a treatment record for foreign trekkers’ insurance claims.

The commission structure that holds the network together was described in detail during police interrogations. Hospitals pay 20 to 25 percent of the insurance payment to trekking companies and a further 20 to 25 percent to helicopter rescue operators in exchange for patient referrals. Trekking guides and their companies benefit from inflated invoices. In some cases, tourists themselves are offered cash incentives to participate.
OpenAI Acquires Popular Tech-Industry Talk Show TBPN
OpenAI is acquiring tech news podcast TBPN, a fast-growing daily show hosted by John Coogan and Jordi Hays. OpenAI says TBPN will keep its editorial independence, even though the acquisition is widely viewed as part of a broader effort to influence public discourse around AI. CNBC reports:
In the announcement, OpenAI CEO of AGI Deployment Fidji Simo wrote that the company’s mission of bringing about artificial general intelligence comes with a responsibility to create a space for “constructive conversation about the changes AI creates.” Sam Altman has appeared on TBPN multiple times and is a frequent presence across media and podcasts, even hitting NBC’s “The Tonight Show Starring Jimmy Fallon” in December.
The announcement says TBPN will maintain editorial independence and continue to choose its own guests. “TBPN is my favorite tech show. We want them to keep that going and for them to do what they do so well,” Altman wrote in a post on X. “I don’t expect them to go any easier on us, am sure I’ll do my part to help enable that with occasional stupid decisions.” OpenAI did not disclose the terms of the deal but said TBPN will be housed within its strategy organization.
“While we’ve been critical of the industry at times, after getting to know Sam and the OpenAI team, what stood out most was their openness to feedback and commitment to getting this right,” wrote Hays in a statement. “Moving from commentary to real impact in how this technology is distributed and understood globally is incredibly important to us.”
Amazon Imposes 3.5% Fuel Surcharge For Many Online Merchants
An anonymous reader quotes a report from Bloomberg:
Amazon will start charging sellers who use its shipping services a 3.5% “fuel and logistics” surcharge later this month, joining the ranks of shipping companies raising prices as the war in Iran pushes oil prices higher. The fees take effect on April 17 for customers of the company’s Fulfillment by Amazon service — which is used by many of the independent sellers who list their products on Amazon’s retail sites — in the US and Canada. Items shipped by Amazon on behalf of merchants who sell on their own sites or at other retailers will carry the surcharge beginning May 2.
“Elevated costs in fuel and logistics have increased the cost of operating across the industry,” Ashley Vanicek, an Amazon spokesperson, said on Thursday. “We have absorbed these increases so far, but similar to other major carriers, when costs remain elevated we implement temporary surcharges to partially recover these costs.”
Vanicek notes that the fee will apply to the sum Amazon charges to ship an item, not the product’s sale price.
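In other words, the 3.5% applies to the fulfillment fee Amazon charges, not to the item’s retail price. A minimal illustration, with made-up fee amounts rather than Amazon’s published rates:

```python
# Minimal illustration of how the surcharge applies: 3.5% of the FBA
# fulfillment (shipping) fee, not of the item's sale price. Both dollar
# figures below are made-up examples, not Amazon's published rates.
sale_price = 29.99       # what the buyer pays for the item (not surcharged)
fulfillment_fee = 5.40   # hypothetical per-unit FBA shipping fee
surcharge_rate = 0.035

surcharge = round(fulfillment_fee * surcharge_rate, 2)
print(f"Sale price (unaffected):  ${sale_price:.2f}")
print(f"Fuel/logistics surcharge: ${surcharge:.2f} per unit")           # $0.19
print(f"Total fulfillment cost:   ${fulfillment_fee + surcharge:.2f}")  # $5.59
```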
Last month, USPS announced that it would impose its first-ever fuel surcharge on packages.
IBM Teams Up With Arm To Run Arm Workloads On IBM Z Mainframes
IBM and Arm are teaming up to let Arm-based software run on IBM Z mainframes. Network World reports:
The two companies plan to work on three things: building virtualization tools so Arm software can run on IBM platforms; making sure Arm applications meet the security and data residency rules that regulated industries must follow; and creating common technology layers so enterprises have more software options across both platforms, IBM said in a statement.
IBM has not said whether the virtualization work will happen at the hypervisor level, through its existing PR/SM partitioning technology, or via containers — a question enterprise architects will need answered before they can assess the collaboration’s practical value. IBM described the effort as serving enterprises that run regulated workloads and cannot simply move them to the cloud.
IBM mainframe customers have largely missed out on the efficiency and price-performance gains Arm has already delivered in the cloud. “Arm says close to half of all compute shipped to top hyperscalers in 2025 runs on Arm chips, with AWS, Google, and Microsoft deploying their own Arm silicon through Graviton, Axion, and Cobalt, respectively,” reports Network World.
That gap is precisely what IBM and Arm’s collaboration intends to address. “This is a mainframe adjacency play,” says Rachita Rao, senior analyst at Everest Group. “The intent is to extend IBM Z and LinuxONE environments by enabling Arm-compatible workloads to run closer to systems of record. While hyperscalers use Arm to lower their own internal power costs and pass savings to cloud-native tenants, IBM is targeting the sovereign and air-gapped market.”
Raspberry Pi 4 3GB Launches, Raspberry Pi Prices Go Up Again Due To RAM
AmiMoJo shares a report from Phoronix:
Raspberry Pi prices are going up yet again due to the continued memory squeeze on the industry. To soften the blow for some use cases, Raspberry Pi also announced a Raspberry Pi 4 3GB model at $83.75 to fill the void between the 2GB and 4GB options.
The 3GB Raspberry Pi 4 was announced at $83.75 USD for those who don’t quite need 4GB of RAM and want to save some money given the ongoing price increases. The Raspberry Pi 4 and Raspberry Pi 5 4GB models are seeing new $25 price increases, the 8GB models are seeing $50 price increases, and the 16GB Raspberry Pi 5 is going up by $100. The Raspberry Pi 500+ is seeing a $150 price increase. The Raspberry Pi Compute Modules are also seeing price increases ranging from $11.25 to $100 USD.
Google Announces Gemma 4 Open AI Models, Switches To Apache 2.0 License
An anonymous reader quotes a report from Ars Technica:
Google’s Gemini AI models have improved by leaps and bounds over the past year, but you can only use Gemini on Google’s terms. The company’s Gemma open-weight models have provided more freedom, but Gemma 3, which launched over a year ago, is getting a bit long in the tooth. Starting today, developers can begin working with Gemma 4, which comes in four sizes optimized for local usage. Google has also acknowledged developer frustrations with AI licensing, so it’s dumping the custom Gemma license.
Like past versions of its open-weight models, Google has designed Gemma 4 to be usable on local machines. That can mean plenty of things, of course. The two large Gemma variants, 26B Mixture of Experts and 31B Dense, are designed to run unquantized in bfloat16 format on a single 80GB Nvidia H100 GPU. Granted, that’s a $20,000 AI accelerator, but it’s still local hardware. If quantized to run at lower precision, these big models will fit on consumer GPUs. Google also claims it has focused on reducing latency to really take advantage of Gemma’s local processing. The 26B Mixture of Experts model activates only 3.8 billion of its 26 billion parameters in inference mode, giving it much higher tokens-per-second than similarly sized models. Meanwhile, 31B Dense is more about quality than speed, but Google expects developers to fine-tune it for specific uses.
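For a concrete picture of what “quantized to run at lower precision” on a consumer GPU typically looks like, here is a hypothetical sketch using the standard Hugging Face Transformers and bitsandbytes 4-bit loading path. The model ID is a placeholder (the article doesn’t give Gemma 4’s actual Hub names), and nothing below is a Gemma-4-specific API.

```python
# Hypothetical sketch: load a large open-weight checkpoint in 4-bit so it fits
# on a consumer GPU. The model ID is a placeholder; the transformers and
# bitsandbytes calls shown are the standard quantized-loading path, not a
# Gemma-4-specific interface.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "google/gemma-4-26b-moe"  # placeholder name, not a confirmed Hub ID

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store weights in 4-bit, compute in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # spread layers across available GPU/CPU memory
)

prompt = "Explain mixture-of-experts routing in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```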
The other two Gemma 4 models, Effective 2B (E2B) and Effective 4B (E4B), are aimed at mobile devices. These options were designed to maintain low memory usage during inference, running at an effective 2 billion or 4 billion parameters. Google says the Pixel team worked closely with Qualcomm and MediaTek to optimize these models for devices like smartphones, Raspberry Pi, and Jetson Nano. Not only do they use less memory and battery than Gemma 3, but Google also touts “near-zero latency” this time around.
The Apache 2.0 license places far fewer restrictions on commercial use, “granting you complete control over your data, infrastructure, and models,” says Google.
Clement Delangue, co-founder and CEO of Hugging Face, called it “a huge milestone” that will help developers use Gemma for more projects and expand what Google calls the “Gemmaverse.”
Artemis II Astronauts Have ‘Two Microsoft Outlooks’ and Neither Work
Even on NASA’s Artemis II mission around the moon, astronauts apparently still have to deal with broken Microsoft Outlook. One of the crew members, Reid Wiseman, jokingly reported that he had “two Microsoft Outlooks” and neither worked. 404 Media reports:
On April 1, four astronauts from the U.S. and Canada embarked on a 10-day flight to loop around the moon. In a moment spotted by VGBees podcast host Niki Grayson on NASA’s livestream of live views from the spacecraft, around 2 a.m. ET, mission control acknowledges an issue with a process control system and offers to remote in — yes, like how your office IT guy would pause his CoD campaign to log into Okta for you because you used the wrong password too many times.
One of the astronauts, Reid Wiseman, says that’s chill, but while they’re in there: “I also see that I have two Microsoft Outlooks, and neither one of those are working.” Astronauts are trained for decades in some of the most physically and mentally grueling environments of any career. They’re some of the smartest people on the planet, and they have to be, before we strap them to 3.2 million pounds of jet fuel and make them do complex experiments and high-stakes decisions for days on end. And yet, once they get up there, fucking Outlook is borked.
Nvidia Rolls Out Its Fix For PC Gaming’s ‘Compiling Shaders’ Wait Times
Nvidia has begun rolling out a beta feature that automatically compiles game shaders while a PC is idle. It won’t eliminate shader compilation the first time a game runs, but Ars Technica reports it could help reduce those repeated wait times. From the report:
Nvidia’s new Auto Shader Compilation system promises to “reduc[e] the frequency of game runtime compilation after driver updates” for users running Nvidia’s GeForce Game Ready Driver 595.97 WHQL or later. When the feature is active and your machine is idle, the app will automatically start rebuilding DirectX shaders for your games so they’re all set to roll the next time they launch.
While the feature defaults to being turned off when the Nvidia App is first downloaded, users can activate it by going to the Graphics Tab > Global Settings > Shader Cache. There, they can set aside disk space for precompiled shaders and decide how many system resources the compilation process should use. App users can also manually force shader recompilation through the app rather than waiting for the machine to go idle.
Unfortunately, Nvidia warns that users will still have to generate shaders in-game after downloading a title for the first time. The Auto Shader Compilation system only generates the new shaders needed after subsequent driver updates following that first run of a new title.
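The reason a driver update triggers recompilation in the first place: compiled shaders are cached under a key that depends on the driver (and GPU) that produced them, so a new driver invalidates every cached entry until the shaders are rebuilt. A conceptual sketch of that caching behavior, not Nvidia’s actual implementation:

```python
# Conceptual sketch (not Nvidia's implementation) of why driver updates force
# shader recompilation: the cache key includes the driver version, so a new
# driver means every lookup misses until shaders are rebuilt.
import hashlib

cache: dict[str, bytes] = {}

def cache_key(shader_source: str, gpu: str, driver_version: str) -> str:
    blob = f"{gpu}|{driver_version}|{shader_source}".encode()
    return hashlib.sha256(blob).hexdigest()

def get_or_compile(shader_source: str, gpu: str, driver_version: str) -> bytes:
    key = cache_key(shader_source, gpu, driver_version)
    if key not in cache:
        print("cache miss: compiling (the in-game 'compiling shaders' wait)")
        cache[key] = b"<compiled shader binary>"  # stand-in for real compilation
    else:
        print("cache hit: reusing compiled shader")
    return cache[key]

get_or_compile("frag_shader_src", "RTX 4080", "590.26")  # first run: miss
get_or_compile("frag_shader_src", "RTX 4080", "590.26")  # same driver: hit
get_or_compile("frag_shader_src", "RTX 4080", "595.97")  # after driver update: miss again
```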