Alterslash picks up to five of the best comments from each of the day’s Slashdot stories and presents them on a single page for easy reading.
Wikipedia Bans Use of Generative AI
Wikipedia has banned the use of generative AI to write or rewrite articles, saying it “often violates several of Wikipedia’s core content policies.” That said, editors may still use it for translation or light refinements as long as a human carefully checks the copy for accuracy. Engadget reports:
Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs “can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.” Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
“My genuine hope is that this can spark a broader change. Empower communities on other platforms, and see this become a grassroots movement of users deciding whether AI should be welcome in their communities, and to what extent,” Wikipedia administrator Chaotic Enby wrote. The administrator also called the policy a “pushback against enshittification and the forceful push of AI by so many companies in these last few years.”
Tracy Kidder, Author of ‘The Soul of a New Machine’, Dies At 80
Ancient Slashdot reader wiredog writes:
Tracy Kidder, author of “The Soul of a New Machine,” has died at the age of 80. The book chronicles the engineers who designed and built the Data General Eclipse MV/8000, one of the 32-bit superminicomputers released around 1980, just before the PC destroyed that industry. It was excerpted in The Atlantic.
“I’m going to a commune in Vermont and will deal with no unit of time shorter than a season.”
China Reviews $2 Billion Manus Sale To Meta As Founders Barred From Leaving Country
Chinese authorities have barred two Manus executives from leaving the country while investigating whether Meta’s reported $2 billion acquisition of the Singapore-based AI startup violated foreign investment reporting rules. “Manus was founded in China but last year relocated its headquarters and core team to Singapore,” notes the Financial Times. “Meta acquired it for $2 billion at the end of last year.” The Financial Times reports:
Manus’s chief executive Xiao Hong and chief scientist Ji Yichao were summoned to a meeting in Beijing with the National Development and Reform Commission this month, according to three people with knowledge of the matter. They said Xiao and Ji were questioned on potential violations of foreign direct investment rules related to its onshore Chinese entities.
After the meeting, the Singapore-based executives were told they were not allowed to leave China because of a regulatory review, while they remain free to travel within the country, two of the people said. No formal investigation has been opened and no charges have been brought. Manus is actively seeking law firms and consultancies to help resolve the matter, said a person with knowledge of the move.
Researchers At CERN Transport Antiprotons By Truck In World-First Experiment
An anonymous reader quotes a report from Physics World:
Researchers at the CERN particle-physics lab have successfully transported antiprotons in a lorry across the lab’s main site. The feat, the first of its kind, follows a similar test with protons in 2024. CERN says the achievement is “a huge leap” towards being able to transport antimatter between labs across Europe. […] To do so, in 2020 the BASE team began developing a device, known as BASE-STEP (for Baryon-Antibaryon Symmetry Experiment-Symmetry Tests in Experiments with Portable Antiprotons), to store and transport antiprotons. It works by trapping particles in a Penning trap composed of gold-plated cylindrical electrode stacks made from oxygen-free copper, surrounded by a superconducting magnet bore operated at cryogenic temperatures.
The device, which also contains a carbon-steel vacuum chamber to shield the particles from stray magnetic fields, is then mounted on an aluminium frame. This allows it to be transported using standard forklifts and cranes and withstand the bumps and vibrations of transport. In 2024, BASE researchers used the device to transport a cloud of about 10⁵ trapped protons across CERN’s Meyrin campus for four hours. After that feat, the researchers began to adjust BASE-STEP to handle antiprotons and yesterday the team successfully transported a trap containing a cloud of 92 antiprotons around the campus for 30 minutes, traveling up to 42 km/h.
With further improvements and tests, the team now hopes to transport the antiprotons further afield. The first destination on the team’s list is the Heinrich Heine University (HHU) in Dusseldorf, Germany, a journey that would take about eight hours. “This means we’d have to keep the trap’s superconducting magnet at a temperature below 8.2 K for that long,” says BASE-STEP’s leader Christian Smorra. “So, in addition to the liquid helium, we’d need to have a generator to power a cryocooler on the truck. We are currently investigating this possibility.” If the antiprotons can be transported to HHU, physicists would then use the particles to search for charge-parity-time violations in protons and antiprotons with a precision at least 100 times higher than currently possible at CERN.
Reddit Takes On Bots With ‘Human Verification’ Requirements
Reddit is rolling out human-verification checks for accounts that show signs of bot-like behavior, while also labeling approved automated accounts that provide useful services. The social media company stressed that these checks will only happen if something appears “fishy,” and that it is “not conducting sitewide human verification.” TechCrunch reports:
To identify potential bots, Reddit is using specialized tooling that looks at account-level signals and other factors — like how quickly the account is attempting to write or post content. Using AI to write posts or comments, however, is not against its policies (though community moderators may set their own rules).
To verify an account is human, Reddit will leverage third-party tools like passkeys from Apple, Google, and YubiKey, biometric services like Face ID, or even Sam Altman’s World ID — or, in some countries, government IDs. Reddit notes this last category may be required in some countries like the U.K. and Australia and some U.S. states because of local regulations on age verification, but it’s not the company’s preferred method.
“If we need to verify an account is human, we’ll do it in a privacy-first way,” Reddit co-founder and CEO Steve Huffman wrote in the announcement Wednesday. “Our aim is to confirm there is a person behind the account, not who that person is. The goal is to increase transparency of what is what on Reddit while preserving the anonymity that makes Reddit unique. You shouldn’t have to sacrifice one for the other.”
Melania Trump Welcomes Humanoid Robot At White House Summit
Longtime Slashdot reader theodp writes:
In Melania and the Robot, the New York Times reports on First Lady Melania Trump’s inaugural Fostering the Future Together Coalition Summit, which brought together international leaders, First Spouses from around the world, tech leaders, educators, and nonprofits to collaborate on practical solutions that expand access to educational tools while strengthening protections for children in digital environments (Day 2 WH summary). The Times begins:
“On Wednesday, Mrs. Trump appeared at the White House alongside Figure 3, a humanoid, A.I.-powered robot whose uses, according to the company that makes it, include fetching towels, carrying groceries and serving champagne. But Mrs. Trump joins tech executives and some researchers in envisioning a world beyond robot butlery. She is interested in how these robots could cut it as educators. Both clad in shades of white, the first lady and the visiting robot walked into a gathering of first spouses from around the world, a group that included Sara Netanyahu of Israel, Olena Zelenska of Ukraine, and Brigitte Macron of France. The dulcet tones from a (presumably human) military orchestra played as the first lady and her guest entered the event. Both lady and robot extolled the virtues of further integrating robots into the educational and social lives of children. In the history of modern first-lady initiatives, which have included building a national book festival (Laura Bush), reshuffling the food pyramid (Michelle Obama) and advocating for free community college (Jill Biden), Mrs. Trump’s involvement of a humanoid robot in education policy was a first.”
“Figure 3 delivered brief remarks and delivered salutations in several languages. With its sleek black-and-white appearance, Figure 3 would fit right in with the first lady’s branding aesthetic, which includes a self-titled coffee table book and movie, not least because the name “MELANIA” was emblazoned on the side of its glossy plastic head. After Figure 3 teetered gingerly away, Mrs. Trump looked around the room and told them that the future looked a lot like what they had just witnessed. ‘The future of A.I. is personified,’ she told her audience. ‘It will be formed in the shape of humans. Very soon artificial intelligence will move from our mobile phones to humanoids that deliver utility.’ She invited her guests to envision a future in which a robot philosopher educated children.”
Brazil’s UFO Capital Marks 30 Years Since ‘Alien Encounter’
Thirty years after the alleged 1996 "ET of Varginha" encounter, debate continues to rage over the events that happened in Brazil’s self-styled UFO capital. An anonymous reader quotes an excerpt from the Guardian:
The skies over this far-flung coffee-growing hub went charcoal black, the heavens opened and one of Brazil’s greatest mysteries was born. “It really was something unique,” recalls Marco Antonio Reis, a zoo director, who was at his ranch outside Varginha one stormy day in January 1996 when, he says, an otherworldly creature came to town. Reis and other locals claim the unusually ferocious downpour heralded a series of disturbing and seemingly paranormal events. At least six of the zoo’s animals, including a spider monkey, a tapir and a raccoon, died mysteriously after a horned interloper with bulging red eyes was spotted in the vicinity by a woman who had gone out for a smoke. When a vet examined their corpses, “they were all black inside,” Reis claims.
On a nearby wasteland, three young women spotted a peculiar and malodorous being with a heart-shaped face and three lumps on its head cowering beside a wall. “I’ve seen the devil,” one of those witnesses would later tell her mum. Soon afterwards, an unexplained infection was rumored to have killed a strapping police intelligence officer who was said to have grappled with the oleaginous unidentified being. Three decades later, Reis says he is convinced Varginha received a non-human visit. His only doubt was from where it came.
“We don’t know if it was extraterrestrial or intraterrestrial,” the 71-year-old says as he climbs a staircase to the veranda where the smoker claims to have seen what, in reference to Steven Spielberg’s 1982 film, became known as the “ET of Varginha”. A 2ft statue of a two-toed alien now marks the spot. “It’s possible it was an intraterrestrial, from inside the Earth. They don’t just come from space,” Reis says. “It might have come from the depths of the Earth, too. We don’t even know what it’s like at the bottom of the sea, do we?”
Postal Service to Impose Its First-Ever Fuel Surcharge on Packages
The U.S. Postal Service plans to impose its first-ever fuel surcharge on packages (source paywalled; alternative source), adding an 8% fee starting in April as it struggles with rising fuel costs and ongoing financial pressure. The surcharge will not apply to letter mail and is currently expected to remain in place until January 2027. The Wall Street Journal reports:
Other parcel carriers, including FedEx and United Parcel Service, have imposed fuel surcharges, as well as a basket of other surcharges and fees, for years. Both FedEx and UPS have dramatically raised their fuel surcharges in recent weeks as the price of oil has increased amid the turmoil in the Middle East. […] The post office has been trying to increase the volume of packages it delivers. It previously differentiated itself from commercial carriers by saying that it doesn’t apply residential, Saturday delivery or fuel or remote-delivery surcharges.
Canada’s Immigration Rejected Applicant Based On AI-Invented Job Duties
New submitter haroldbasset writes:
Canada’s Immigration Department rejected an applicant because the duties of her current job did not match the Canadian work experience she had claimed, but the Department’s AI assistant had invented that work experience. She has been working in Canada as a health scientist — she has a Ph.D. in the immunology of aging — but the AI genius instead described her as “wiring and assembling control circuits, building control and robot panels, programming and troubleshooting.”
“It’s believed to be the first time that the department explicitly referred to the use of generative AI to support application processing in immigration refusals,” reports the Toronto Star. “The disclaimer also noted that all generated content was verified by an officer and that generative AI was not used to make or recommend a decision.”
The applicant’s lawyer was shocked at “how any human being could make this decision.” “Somehow, it hallucinated my client’s job description,” he said. “I would love to see what the officer saw. Something seriously went wrong here.”
The applicant’s refusal came just as Canada’s Immigration Department released its first AI strategy, which frames artificial intelligence as a way to improve efficiency, service delivery, and program integrity. The department says it has long used digital tools like analytics and automation to flag fraud risks and triage applications, and is now also experimenting with generative AI for tasks such as research, summarizing, and analysis. In this case, however, the department insisted the decision was made by a human officer and that generative AI was not involved in the final decision.
Apple Can Create Smaller On-Device AI Models From Google’s Gemini
Apple reportedly has full access to customize Google’s Gemini model, allowing it to distill smaller on-device AI models for Siri and other features that can run locally without an internet connection. MacRumors reports:
The Information explains that Apple can ask the main Gemini model to perform a series of tasks that provide high-quality results, with a rundown of the reasoning process. Apple can feed the answers and reasoning information that it gets from Gemini to train smaller, cheaper models. With this process, the smaller models are able to learn the internal computations used by Gemini, producing efficient models that have Gemini-like performance but require less computing power.
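The process The Information describes is essentially knowledge distillation: query a large teacher model, record its full output distributions, and train a smaller student to reproduce them. Below is a minimal numerical sketch of that training signal; the linear “teacher” and “student” here are illustrative toys, not anything resembling Gemini or Apple’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# "Teacher": a stand-in for the large, expensive model. Here it is just
# a fixed linear-softmax rule over 2D inputs with two output classes.
W_teacher = np.array([[2.0, -1.0],
                      [-1.0, 2.0]])

def teacher(x):
    return softmax(x @ W_teacher)

# Step 1: send a batch of queries to the teacher and record its full
# output distributions ("soft targets"), not just the top answer.
X = rng.normal(size=(500, 2))
soft_targets = teacher(X)

# Step 2: train a smaller "student" by minimizing cross-entropy against
# the teacher's soft targets, using plain gradient descent.
W_student = np.zeros((2, 2))
learning_rate = 0.5
for _ in range(300):
    probs = softmax(X @ W_student)
    grad = X.T @ (probs - soft_targets) / len(X)
    W_student -= learning_rate * grad

# Step 3: the student now imitates the teacher on inputs it never saw.
X_test = rng.normal(size=(100, 2))
agreement = np.mean(
    teacher(X_test).argmax(axis=1) == softmax(X_test @ W_student).argmax(axis=1)
)
print(f"student/teacher agreement on held-out inputs: {agreement:.2f}")
```

In the scenario the article describes, the soft targets would be token distributions and reasoning traces from the large model and the student a smaller on-device network, but the training signal works the same way: the student learns from the teacher’s answers rather than from raw labeled data.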
Apple is also able to edit Gemini as needed to make sure that it responds to queries in a way that Apple wants, but Apple has been running into some issues because Gemini has been tuned for chatbot and coding applications, which doesn’t always meet Apple’s needs.
Supreme Court Sides With Internet Provider In Copyright Fight Over Pirated Music
Longtime Slashdot reader JackSpratts writes:
The Supreme Court unanimously said on Wednesday that a major internet provider could not be held liable for the piracy of thousands of songs online in a closely watched copyright clash. Music labels and publishers sued Cox Communications in 2018, saying the company had failed to cut off the internet connections of subscribers who had been repeatedly flagged for illegally downloading and distributing copyrighted music. At issue for the justices was whether providers like Cox could be held legally responsible and required to pay steep damages — a billion dollars or more in Cox’s case — if they knew that customers were pirating music but did not take sufficient steps to terminate their internet access.
In its opinion released (PDF) on Wednesday, the court said a company was not liable for “merely providing a service to the general public with knowledge that it will be used by some to infringe copyrights.” Writing for the court, Justice Clarence Thomas said a provider like Cox was liable “only if it intended that the provided service be used for infringement” and if it, for instance, “actively encourages infringement.” Justice Sonia Sotomayor, joined by Justice Ketanji Brown Jackson, wrote separately to say that she agreed with the outcome but for different reasons. […]
Cox called the court’s unanimous decision a “decisive victory” for the industry and for Americans who “depend on reliable internet service.”
“This opinion affirms that internet service providers are not copyright police and should not be held liable for the actions of their customers,” the company said.
Stephen Colbert To Write Next ‘Lord of the Rings’ Movie
An anonymous reader quotes a report from CNN:
Stephen Colbert already has a new job lined up for when he ends his 11-year run as host of “The Late Show” in May — the comedian and well-known J.R.R. Tolkien superfan announced he will co-write and develop a new film in the blockbuster “Lord of the Rings” franchise. Colbert joined “LOTR” director Peter Jackson to reveal the news in a video announcement.
“I’m pretty happy about it. You know what the books mean to me and what your films mean to me,” the late-night host told Jackson, who led the Oscar-winning team behind the nearly $6 billion original “Lord of the Rings” and “The Hobbit” trilogies. […] Colbert said the next installment will be based on parts of Tolkien’s “The Fellowship of the Ring” book that didn’t make it into the original movies. “The thing I found myself reading over and over again were the six chapters early on in (The Fellowship of the Ring) that y’all never developed into the first movie back in the day … and I thought, ‘Oh, wait, maybe that could be its own story that could fit into the larger story,’” he said.
Colbert said he discussed the idea with his son, screenwriter Peter McGee, to work out the framing of the story. “It took me a few years to scrape my courage into a pile and give you a call, but about two years ago, I did. You liked it enough to talk to me about it,” Colbert told Jackson. Colbert said he, McGee and Jackson have been working alongside screenwriter Philippa Boyens on the development of the story. “I could not be happier to say that they loved it, and so that’s what we’re going to be working on,” Colbert said.
Colbert’s LOTR movie, tentatively titled “Shadow of the Past,” will be the second of two new upcoming films in the franchise from Warner Bros. Discovery. The first, “The Hunt for Gollum,” is due to be released in 2027.
Meta and YouTube Found Negligent in Landmark Social Media Addiction Case
A jury found Meta and YouTube negligent in a landmark social media addiction case, ruling that addictive design features such as infinite scroll and algorithmic recommendations harmed a young user and contributed to her mental health distress. The verdict awards $3 million in compensatory damages so far and could pave the way for more lawsuits seeking financial penalties and product changes across the social media industry. “Meta is responsible for 70 percent of that cost and YouTube for the remainder,” notes The New York Times. “TikTok and Snap both settled with the plaintiff for undisclosed terms before the trial started.” From the report:
The bellwether case, which was brought by a now 20-year-old woman identified as K.G.M., had accused social media companies of creating products as addictive as cigarettes or digital casinos. K.G.M. sued Meta, which owns Instagram and Facebook, and Google’s YouTube over features like infinite scroll and algorithmic recommendations that she claimed led to anxiety and depression.
The jury of seven women and five men will deliberate further to decide what punitive damages the companies should pay for malice or fraud. The verdict in K.G.M.’s case — one of thousands of lawsuits filed by teenagers, school districts and state attorneys general against Meta, YouTube, TikTok and Snap, which owns Snapchat — was a major win for the plaintiffs. The finding validates a novel legal theory that social media sites or apps can cause personal injury. It is likely to factor into similar cases expected to go to trial this year, which could expose the internet giants to further financial damages and force changes to their products.
The verdict also comes on the heels of a New Mexico jury ruling that found Meta liable for violating state law by failing to protect users of its apps from child predators.
Meta Loses Trial After Arguing Child Exploitation Was ‘Inevitable’
Meta lost a child safety trial in New Mexico after a court found that its platforms failed to adequately protect children from exploitation and misled parents about app safety. According to Ars Technica, the jury on Tuesday “deliberated for only one day before agreeing that Meta should pay $375 million in civil damages…” While the jury declined to impose the maximum penalty New Mexico sought, which could have cost the company $2.2 billion, Meta may still face additional financial penalties and could be forced to make changes to its apps. From the report:
The trial followed a 2023 lawsuit filed by New Mexico Attorney General Raul Torrez after The Guardian published a two-year investigation exposing child sex trafficking markets on Facebook and Instagram. Torrez’s office then conducted an undercover investigation codenamed “Operation MetaPhile,” in which officers posed as children on Facebook, Instagram, and WhatsApp. The jury heard that these fake profiles were “simply inundated with images and targeted solicitations” from child abusers, Torrez told CNBC in 2024. Ultimately, three men were arrested amid the sting for attempting to use Meta’s social networks to prey on children. At trial, Mark Zuckerberg and Instagram chief Adam Mosseri testified that “harms to children, such as sexual exploitation and detriments to mental health, were inevitable on the company’s platforms due to their vast user bases,” The Guardian reported. Internal messages and documents, as well as testimony from child safety experts within and outside the company, showed that Meta repeatedly ignored warnings and failed to fix platforms to protect kids, New Mexico’s AG successfully argued.
Perhaps most troubling to the jury, law enforcement and the National Center for Missing and Exploited Children also testified that Meta’s reporting of crimes against children on its apps — including child sexual abuse materials (CSAM) — was “deficient,” The Guardian reported. Rather than make it easy to trace harms on its platforms, the jury learned from frustrated cops that Meta “generated high volumes of ‘junk’ reports by overly relying on AI to moderate its platforms.” This made its reporting “useless” and “meant crimes could not be investigated,” The Guardian reported.
Celebrating the win as a “historic victory,” Torrez told CNBC that families had previously paid the price for “Meta’s choice to put profits over kids’ safety.” “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew,” Torrez said. “Today the jury joined families, educators, and child safety experts in saying enough is enough.”
Meta said the company plans to appeal the verdict. “We respectfully disagree with the verdict and will appeal,” Meta’s spokesperson said. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content. We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”
AI Economy Is a ‘Ponzi Scheme,’ Says AI Doc Director
An anonymous reader quotes a report from Vanity Fair:
Focus Features is releasing The AI Doc: Or How I Became an Apocaloptimist in theaters on March 27. If you’re even slightly interested in what’s going on with AI, it’s required viewing: The film touches on all aspects of the technology, from how it’s currently being used to how it will be used in the near future, when we potentially reach the age of artificial general intelligence, or AGI. AGI is a theoretical form of AI that supposedly would be able to perform complex tasks without each step being prompted by a human user — the point at which machines become autonomous, like Skynet in the Terminator franchise. […]
[Director Daniel Roher] interviews nearly all the major players in the AI space: Sam Altman of OpenAI; the Amodei siblings of Anthropic; Demis Hassabis of DeepMind (Google’s AI arm); theorists and reporters covering the subject. Notably absent are Elon Musk and Mark Zuckerberg. “Have you seen that guy speak? He’s like a lizard man,” Roher says regarding Zuckerberg. “Musk said yes initially, but it was right when he was doing all the stuff with Trump, and we just got ghosted after a while,” adds [codirector Charlie Tyrell]. Altman, arguably AI’s greatest mascot, is prominently featured in the documentary. But Roher wasn’t buying it. “That guy doesn’t know what genuine means,” he says. “Every single thing he says and does is calculated. He is a machine. He’s like AI, and it’s in the service of growth, growth, growth. You can be disingenuous and media savvy.” […]
How, exactly, is Roher an apocaloptimist? “We are preaching a worldview,” he says, “in a world that’s asking you to either see this as the apocalypse or embrace it with this unbridled optimism.” He and his film are taking a stance that rests between those two poles. “It’s both at the same time. We have to try and embrace a middle ground so this technology doesn’t consume us, so we can stay in the driver’s seat,” says Roher — meaning, it’s up to all of us to chart the course. “You have to speak up,” says Tyrell. “Things like AI should disclose themselves. If your doctor’s office is using an AI bot, you have to say, I don’t like that.” The driving message behind the film is that resistance starts with the people. That position is shared by The AI Doc producer Daniel Kwan, who won an Oscar for directing Everything Everywhere All at Once and has been at the forefront of discussions about AI in the entertainment industry. […]
Roher and Tyrell both use AI in their everyday lives and openly admit to it being a helpful tool. They also agree that this technology can make daily tasks easier for the average consumer. But at the end of our conversation, we get into the economics of AI and how Wall Street is propping up the industry through huge valuations of these companies — and Roher gets going yet again. “This is all smoke and mirrors. The entire economy of AI is being propped up by a Ponzi scheme. The hype of this technology is unlike any hype we’ve seen,” he says. “I feel like I could announce in a press release that Academy Award winner Daniel Roher is starting an AI film company, and I could sell it the next day for $20 million. It’s fucking crazy.” […] “These people are prospectors, and they are going up to the Yukon because it’s the gold rush.”
Well, the hard job is done, then.
What remains is the trivial matter of enforcement. I guess they can use LLMs to evaluate submissions for the presence of a human factor.