Alterslash picks up to the best 5 comments from each of the day’s Slashdot stories, and presents them on a single page for easy reading.
The Canvas Hack Is a New Kind of Ransomware Debacle
Wired describes the recent Canvas breach as an unusually disruptive ransomware-style extortion incident because one attack on Instructure’s learning platform temporarily paralyzed thousands of schools during finals and end-of-year assignments. The hackers using the “ShinyHunters” name claim more than 8,800 schools were affected, while Instructure says exposed data included names, email addresses, student ID numbers, and platform messages. From the report:
Higher education has long been a target of ransomware gangs and data extortion attacks. But never before, perhaps, has a cyberattack against a single software platform so thoroughly disrupted the daily operations of thousands of schools across the United States. The widely used digital learning platform Canvas was put into “maintenance mode” on Thursday after its maker, the education tech giant Instructure, suffered a data breach and faced an extortion attempt by attackers using the recognizable moniker “ShinyHunters.” Though the hackers have been advertising the breach and attempting to extract a ransom payment from Instructure since May 1, the situation took on additional immediacy for regular people across the US and beyond on Thursday because the Canvas downtime caused chaos at schools, including those in the midst of finals and end-of-year assignments.
Universities like Harvard, Columbia, Rutgers, and Georgetown sent alerts to students about the situation in recent days; other institutions, including school districts in at least a dozen states, also appear to have been affected. In a list published by the hackers behind the attack on their ransom-focused dark web site, they claim the breach affected more than 8,800 schools. The exact scale and reach of the breach are currently unclear, though. And the fact that Canvas was down throughout Thursday afternoon and evening further complicated the picture. In a running incident update log that began on May 1, Steve Proud, Instructure’s chief information security officer, said that the company had “recently experienced a cybersecurity incident perpetrated by a criminal threat actor.” He added on May 2 that “the information involved” for “users at affected institutions” included names, email addresses, student ID numbers, and messages exchanged by users on the platform.
The situation was ultimately marked as “Resolved” on Wednesday, with Proud writing that “Canvas is fully operational, and we are not seeing any ongoing unauthorized activity.” At midday on Thursday, though, the Instructure status page registered an “issue” where “some users are having difficulties logging into Student ePortfolios.” Within a few hours, the company had added another status update: “Instructure has placed Canvas, Canvas Beta and Canvas Test in maintenance mode.” Late Thursday evening, the company said that Canvas was available again “for most users.”
TechCrunch reported on Thursday that the hackers launched a secondary wave of attacks, defacing some schools’ Canvas portals by injecting an HTML file to display their own message on the schools’ Canvas login pages. According to The Harvard Crimson, attackers modified the Harvard Canvas login page to show a message that included a list of schools that the hackers claim were impacted by the breach. The message from attackers “urged schools included on the affected list to consult with a cyber advisory firm and contact the group privately to negotiate a settlement before the end of the day on May 12 — or else risk their data being leaked,” The Crimson reported. “It is unclear what information tied to Harvard affiliates was included in the alleged breach.”
Sam Altman Had a Bad Day In Court
An anonymous reader quotes a report from Business Insider:
As the trial between Elon Musk and OpenAI ended its second week, the Tesla CEO started scoring points against Sam Altman. His witnesses landed three solid punches in testimony about how Altman runs OpenAI as CEO, raising concerns about his dedication to AI safety, the nonprofit’s mission, and his honesty as a leader of the organization. […] This week, Musk’s legal team called a parade of witnesses who questioned whether Altman was acting in the interest of the nonprofit. On Thursday, that included a former OpenAI safety researcher, who described a slow erosion of the company’s safety teams, which prompted her to leave the company. Witnesses also shared stories about the company launching products without the proper safety reviews — or the knowledge of the board.
Rosie Campbell, a former AI safety researcher at OpenAI, testified that the company became more product-focused during her time there and moved away from the long-term safety work that had initially drawn her in. She said both long-term AI safety teams were eventually eliminated, and that she supported Altman’s reinstatement only because she feared OpenAI might otherwise collapse into Microsoft: “It was my understanding at the time that the best way for OpenAI to not disintegrate and fall apart would be for Sam to return.” Still, Campbell’s testimony wasn’t entirely favorable to Musk. She also said xAI, Musk’s AI company, likely had an approach to safety inferior to OpenAI’s.
Helen Toner, another former OpenAI board member, also testified about the board’s concerns leading up to Altman’s removal. She said the board was not primarily worried about ChatGPT’s safety, but about Altman’s leadership and investor relationships, saying, “The issues that we were concerned about in our decision to fire Sam were exacerbated by relationships with investors.” Toner also described concerns that Altman was misrepresenting what others had said, telling the court, “We were concerned that Sam was inserting words into other people’s mouths in order to get people to do what he wanted.”
Meanwhile, Tasha McCauley, a former OpenAI board member, described a deep loss of trust in Altman and accused him of creating “chaos” and “crisis” inside the company. She said Altman fostered a “culture of lying and culture of deceit,” including allegedly misleading others about whether GPT-4 Turbo needed internal safety review before launch.
Musk’s lawyers then called to the stand David Schizer, a Columbia Law professor and nonprofit-governance expert, who framed Altman’s alleged behavior as a serious governance problem for an organization that was supposed to be mission-driven. Asked about claims that products were launched without full board awareness or safety review, he said, “The board and CEO need to be partnering, working together, to make sure the mission is being followed,” adding that “if the CEO is withholding that information, it’s a big problem.”
The day ended with the start of a Microsoft executive’s deposition. Microsoft VP Michael Wetter said Azure had integrated OpenAI technology, that Microsoft saw strategic value in having AI developers build on Azure, and that a 2016 agreement allowed OpenAI to use Microsoft tools for free even though it could mean a loss of up to $15 million for Microsoft. Testimony ended early, with no court on Friday and the trial set to resume Monday.
Recap:
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial (Day Seven)
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
IMF Warns New AI Models Risk ‘Systemic’ Shock To Finance
The IMF is warning that advanced AI-powered cyberattacks pose a serious threat to global financial stability. “IMF analysis suggests that extreme cyber-incident losses could trigger funding strains, raise solvency concerns, and disrupt broader markets,” the lender warned in a new report. The report urged greater international cooperation and emphasized resilience, since breaches are “inevitable” — particularly for emerging economies with weaker defenses. Agence France-Presse reports:
The study’s authors highlighted the risks posed by the highly interconnected nature of the global financial system, with advanced AI models able to “dramatically reduce” the time and cost of exploiting vulnerabilities. […] The IMF warned that emerging and developing countries, “which often have more severe resource constraints, may be disproportionately exposed to attackers targeting regions with weaker defenses.”
The risks, the authors said, were systemic, cut across sectors and came with the threat of contagion, with the reliance on a small number of platforms and cloud providers likely to increase “the impact of any single exploited weakness.” “Defenses will inevitably be breached, so resilience must also be a priority, specifically to limit how far incidents spread and ensure rapid recovery,” the report said.
IMF chief Kristalina Georgieva warned last month that the global financial system was not ready for the cybersecurity threats posed by AI. “We are very keen to see more attention to the guardrails that are necessary to protect financial stability in a world of AI,” she told CBS News, seeking global collaboration on the issue.
60% of MD5 Password Hashes Are Crackable In Under an Hour
In honor of World Password Day, Kaspersky researchers revisited their study on the crackability of real-world passwords and found that 60% of MD5-hashed passwords could be cracked in under an hour with a single Nvidia RTX 5090, and 48% could be cracked in under a minute. “The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach,” reports The Register. From the report:
Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.
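As a concrete illustration of what “cracking” means here, below is a minimal C sketch of the dictionary attack that such patterns accelerate, using OpenSSL’s legacy MD5 API. The three-word list is invented and the target is the well-known MD5 digest of “password1”; a GPU-based cracker does the same thing, only with billions of pattern-ranked guesses per second.

/*
 * Minimal dictionary-attack sketch against an MD5 password hash.
 * The wordlist is invented; the target is the well-known MD5 digest
 * of "password1". Uses OpenSSL's legacy MD5() function (deprecated
 * in OpenSSL 3 but still available). Build: cc md5_crack.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

int main(void) {
    const char *leaked = "7c6a180b36896a0a8c02787eeafb0e4c"; /* md5("password1") */
    const char *wordlist[] = { "letmein", "qwerty123", "password1" };

    for (size_t i = 0; i < sizeof wordlist / sizeof *wordlist; i++) {
        unsigned char digest[MD5_DIGEST_LENGTH];
        char hex[2 * MD5_DIGEST_LENGTH + 1];

        /* one fast hash per guess -- this speed is MD5's whole weakness */
        MD5((const unsigned char *)wordlist[i], strlen(wordlist[i]), digest);
        for (int j = 0; j < MD5_DIGEST_LENGTH; j++)
            sprintf(hex + 2 * j, "%02x", digest[j]);

        if (strcmp(hex, leaked) == 0) {
            printf("cracked: %s\n", wordlist[i]);
            return 0;
        }
    }
    puts("no match in wordlist");
    return 1;
}

Slow, deliberately expensive hashes such as bcrypt or Argon2 defeat exactly this loop: each guess costs orders of magnitude more compute, so the same wordlist takes years instead of minutes.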
In case you’re wondering whether there’s a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you — only a few percent — but it’s still a move in the wrong direction. “Attackers owe this boost in speed to graphics processors, which grow more powerful every year,” Kaspersky explained. “Unfortunately, passwords remain as weak as ever.”
“This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so,” said senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell. His advice is that providers need to modernize their login systems and enforce stronger protections, because users are often stuck with whatever security options they’re given.
CEOs Want Tariff Refunds As Earnings Take a Hit
Companies including Philips and Pandora say they plan to seek tariff reimbursements after the Supreme Court ruled Trump’s sweeping duties illegal, with the U.S. potentially facing up to $175 billion in refunds. Many firms say tariffs hurt earnings, but CFO survey results suggest companies applying for refunds are unlikely to pass savings back to consumers through lower prices. CNBC reports:
Companies across Europe are flagging disruption from tariffs as a factor contributing to a skewed earnings picture. “We will ask for a rebate of tariffs in line with the government policies,” Roy Jakobs, CEO of healthtech firm Philips, told CNBC’s “Squawk Box Europe” on Wednesday morning. “We have been saying that of course we prefer a world without tariffs, without trade barriers, because we want to serve patients.” Philips included the cost of tariffs within its full-year guidance and did not assume the impact from any potential refunds. Danish jeweler Pandora also announced its intention to apply for a rebate on Wednesday, with CEO Berta de Pablos-Barbier telling CNBC that tariffs were a “headwind” to earnings in the first quarter. “We have no news yet, so we cannot count on any of that refund,” she told CNBC’s “Squawk Box Europe.” “Let’s wait and see.”
De Pablos-Barbier noted that the biggest factor impacting Pandora’s profit this quarter is the cost of silver, which more than quadrupled in the last 18 months. She reiterated the firm’s pivot from pure silver to platinum as a way of reducing costs. BMW, Daimler, Renishaw, Smith & Nephew and Continental all flagged tariffs as negatively impacting results in a slew of earnings updates on Wednesday, but the companies did not say whether they are applying for rebates. Businesses often bear some of the cost of tariffs, with some costs passing on to consumers through price hikes. Tariffs have had an overall inflationary impact on the economy, economists have told CNBC.
Despite the refund process potentially covering more than 330,000 importers on roughly 53 million entries, per court documents, consumers are unlikely to benefit, according to the results of the latest CNBC CFO Council quarterly survey. Twelve of the 25 chief financial officers interviewed said their company plans to apply for tariff refunds; however, none intend to lower prices in response.
Microsoft Issues Warning About Linux ‘Copy Fail’ Vulnerability
joshuark shares a report from Linux Magazine:
Microsoft has issued a warning that a vulnerability with a CVSS score of 7.8 has been found in the Linux kernel. The vulnerability in question is tagged CVE-2026-31431 and, according to the Cybersecurity and Infrastructure Security Agency (CISA), “This Linux Kernel Incorrect Resource Transfer Between Spheres Vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise.”
The distributions affected are Ubuntu, Red Hat, SUSE, Debian, Fedora, Arch Linux, and Amazon Linux. This could also affect any distribution based on those in the list, which means pretty much every Linux distro that isn’t independent. The flaw is found in the Linux kernel cryptographic subsystem’s algif_aead module of AF_ALG. The problem is that a particular optimization has led to the kernel reusing the source memory as the destination during cryptographic operations. What this means is that attackers can take advantage of interactions between the AF_ALG socket interface and a splice() system call. Until patches are released, Microsoft is advising that the affected crypto feature should be disabled, or AF_ALG socket creation should be blocked.
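For administrators applying the second workaround, a minimal Linux sketch along these lines can confirm whether the interface is still reachable. Two assumptions to flag: that the flaw is specific to the “aead” interface as described above, and that blocking is done with a standard modprobe rule such as “install algif_aead /bin/false”. The probe exercises socket creation and bind only; it does not reproduce the reported splice() flaw.

/*
 * Probe whether the kernel's AF_ALG AEAD interface (the algif_aead
 * module named in the advisory) is still reachable, e.g. after an
 * interim modprobe block. Linux-only. Build: cc afalg_probe.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void) {
    struct sockaddr_alg sa;
    int fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
    if (fd < 0) {
        perror("socket(AF_ALG)");  /* AF_ALG itself unavailable or blocked */
        return 0;
    }
    memset(&sa, 0, sizeof sa);
    sa.salg_family = AF_ALG;
    strcpy((char *)sa.salg_type, "aead");      /* the affected interface */
    strcpy((char *)sa.salg_name, "gcm(aes)");  /* a common kernel AEAD */
    if (bind(fd, (struct sockaddr *)&sa, sizeof sa) == 0)
        puts("algif_aead reachable: interim mitigation NOT in effect");
    else
        perror("bind");            /* algif_aead likely disabled/blocked */
    close(fd);
    return 0;
}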
The vulnerability is also known as “Copy Fail,” which has been shared on Slashdot and detailed in a technical report. The vulnerability affects almost every version of the Linux OS and is now being exploited in the wild. U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
Google Unveils Screenless Fitbit Air, Google Health App To Replace Fitbit
An anonymous reader quotes a report from Ars Technica:
Wearables have really come full circle. The early Fitbits didn’t have screens, but the move to smartwatches put a screen on everyone’s wrist. Now, devices like Whoop and Hume are designed as data trackers first and foremost without so much as a clock. Google’s newest wearable jumps on that trend: The Fitbit Air doesn’t have a screen, but it does have a suite of health sensors that pipe data into the new Google Health app. And if you want, Google has a new AI-powered health coach in the app ready to tell you what that data means (maybe).
The Fitbit Air itself is a small plastic puck about 1.4 inches long and 0.7 inches wide. It slots into various bands that hold the bottom-mounted sensors against your wrist. There’s no display pointing upward, so the entire device is covered by the fabric or plastic of the band. It’s a streamlined and potentially stylish look — in uncharacteristic fashion, Google has plenty of colors and style options available, including a special-edition Steph Curry version. You may have heard chatter about Curry teasing a new screenless Fitbit, and this is it. […]
The Fitbit app is getting a major makeover and a new name. An update in the coming weeks will transform that app into Google Health, featuring a new interface with a more extensive Material Expressive aesthetic and redesigned menus and tabs. You also won’t see Fitbit branding in as many places — the Fitbit Premium subscription will become Google Health Premium. Without a subscription, the app still does all the basic things, like tracking your health stats, automatically logging workouts, and showing it all in a pretty dashboard. With the Premium subscription, you get all the features from Fitbit Premium plus the new AI Health Coach. It’s a chatbot, so you can ask it about any health or wellness topics, and the answers are grounded in your health data.
The Fitbit Air launches May 26 for $99.99, includes a Performance Loop band, and comes with three months of the new Google Health Premium that replaces Fitbit Premium and adds Google’s AI Health Coach.
Meanwhile, Google Health Premium will cost $10 per month or $100 per year, though it’s included with AI Pro or AI Ultra. Non-subscribers can still use basic tracking features. Ars also notes that when Google Fit shuts down later this year, users will need to migrate their data to Google Health.
LinkedIn Profile Visitor Lists Belong to the People, Says Noyb
A LinkedIn user in the EU is challenging Microsoft’s refusal to provide a full list of profile visitors under GDPR Article 15, arguing that the data should be available for free because LinkedIn processes it and sells a more complete version to Premium users. Privacy group Noyb says the case could set a broader precedent over whether companies can monetize user-related data while denying access to the same data through GDPR requests. “Selling data to its own users is a popular practice among companies,” Noyb data protection lawyer Martin Baumann said of the case. “In reality, however, people have the right to receive their own data free of charge.” The Register reports:
Take a look at the language of Article 15, and it’s pretty clear: data subjects (i.e., users) have the right to a copy of any and all data concerning them that’s been processed by the provider. A full list of profile visitors seemingly should fall under Article 15 data — even if it’s normally reserved for paying users and presented to them in a nicer way, it should still be accessible to free users who actually request it. […] Noyb acknowledges there’s a clear bit of legal fuzz stuck in this corner of the GDPR when it comes to premium service offerings. “If any business processes a person’s personal data, this information is generally covered by their right of access under the GDPR,” Baumann told The Register. “It does not matter that the business would prefer to sell the data to the data subject or that it would be harmful for their business model if they would.”
There’s only one exception in Article 15 that would give LinkedIn an out, Baumann told us, and that’s the last paragraph, which says a person’s right to their data can’t adversely affect the rights and freedoms of others. Were LinkedIn to argue that it had to protect the identities of people who visited a data subject’s profile, it could have an excuse. But not a good one, in Baumann’s opinion. “Since LinkedIn does provide information about profile visits to paying Premium members, it cannot consider that disclosing the data would adversely affect the rights of the visitors whose data is disclosed,” the Noyb lawyer explained. “Otherwise, providing this information to Premium users would be unlawful too.”
What seems to be the sticking point here is where right of access begins and a company’s right to make money off data they hold (data that was, ahem, supplied by users) ends. Baumann said he hopes this case can clear the legal air. “We expect a clarification concerning the fact that personal data that can be accessed when a user pays for it is also covered by their right of access,” he explained. […] Baumann said there are numerous other cases where similar legal clarification would be appreciated, citing the example of a bank that is unwilling to provide access to account statements in response to a GDPR request, but is happy to hand over similar data for a fee. “A precedent would be welcomed,” Baumann said.
A LinkedIn spokesperson told The Register: “Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy.”
Motherboard Sales ‘Collapse’ By More Than 25%
Motherboard sales are sharply declining as AI demand drives shortages and price hikes for memory, storage, CPUs, and other PC components. “Because of this, users who don’t have deep pockets are putting off upgrading their PCs and holding on to their current devices longer,” reports Tom’s Hardware. From the report:
Asus, which sold 15 million motherboards in 2025, has only shipped a little more than 5 million in the first half of 2026. The company is expected to have to push hard to move even 10 million units by the end of the year, which would mark a 33% year-on-year decrease in sales. Gigabyte and MSI sold 11.5 million and 11 million motherboards last year, respectively. However, both companies have revised their internal forecasts for 2026 to 9 million (Gigabyte) and 8.4 million (MSI), a 22% drop for the former and a 24% contraction for the latter.
ASRock will be hardest hit by the situation, with the company’s shipments projected to fall by 37%, from 4.3 million in 2025 to just 2.7 million by the end of the year. This marks a contraction of 28% for the overall motherboard market, at least for the big four manufacturers. […] Aside from this, AMD continues to use the AM5 socket for its latest processors, while Intel’s Nova Lake, which will reportedly use LGA 1954, isn’t available until later this year. The situation is further compounded by Nvidia not releasing a refreshed RTX 50 Super series this year, while rumors claim that the RTX 60 series will not debut until 2028. This confluence of factors is discouraging PC builders from upgrading their current systems.
Anthropic Raises Claude Code Usage Limits, Credits New Deal With SpaceX
An anonymous reader quotes a report from Ars Technica:
At its Code with Claude developer conference on Wednesday, Anthropic announced a deal with SpaceX to utilize the entire compute capacity of the latter’s data center in Memphis, Tennessee. On stage at the conference, CEO Dario Amodei said the deal was intended to increase usage limits for Anthropic’s Pro and Max plan subscribers. The announcement was accompanied by an increase in those usage limits; Anthropic doubled Claude Code’s five-hour window limits for Pro and Max subscribers, removed the peak-hours limit reduction on Claude Code for those same accounts, and raised API limits for its Opus model. The table [here] outlining the Opus changes was shared in the company’s blog post on the topic.
Anthropic claims the deal gives the company access to more than 300 megawatts of new compute capacity. For its part, SpaceX focused its announcement on the capability of the Colossus 1 supercomputer that’s at the center of the deal. “Colossus 1 features over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators,” SpaceX wrote. Additionally, Anthropic “expressed interest” in working with SpaceX to build up “multiple gigawatts” of orbital compute capacity, tying into a recent (but unproven) focus on exploring orbital data centers as an answer to the problem that “compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter.”
“I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed,” Elon Musk said on Wednesday. “No one set off my evil detector.”
Richard Dawkins ‘Convinced’ AI Is Conscious
Mirnotoriety shares a report from The Telegraph:
Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine. The evolutionary biologist said he had the “overwhelming feeling” of talking to a human during conversations with Claude, and said it was hard not to treat the program as “a genuine friend.”
In an essay for Unherd, Prof Dawkins released transcripts that he said showed that the chatbot had mulled over its “inner life” and existence and seemed saddened by the knowledge it would soon “die.” Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. “He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: ‘You may not know you are conscious, but you bloody well are!’” Prof Dawkins said. “My own position is: if these machines are not conscious, what more could it possibly take to convince you that they are?”
Mirnotoriety also points to John Searle’s Chinese Room (PDF), which argues that something can sound intelligent without actually understanding anything. Applied to Dawkins’ experience with Claude, it suggests he may have been responding to a very convincing illusion of consciousness rather than the real thing:
John Searle’s Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle’s point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.
Applying this logic to Large Language Models, the “person in the room” corresponds to the inference engine, while the “rulebook” is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.
Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is “matching shapes” on such an immense scale that it creates the near-perfect illusion of semantic understanding.
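A toy sketch makes the point concrete. The four-word vocabulary and the scores below are invented; a real model runs this step over a vocabulary of roughly 100,000 tokens, with scores computed by the trained network. Either way, the “prediction” is nothing but arithmetic over numbers:

/*
 * Toy version of the "matching shapes" step: turn the scores (logits)
 * a model assigns to each candidate next token into probabilities and
 * pick the likeliest. No meaning is consulted anywhere.
 * Build: cc next_token.c -lm
 */
#include <stdio.h>
#include <math.h>

int main(void) {
    const char *vocab[] = { "cat", "sat", "ran", "the" };
    double logits[] = { 1.2, 3.1, 2.4, 0.3 };  /* invented scores after "the cat ..." */
    enum { N = 4 };
    double probs[N], sum = 0;
    int best = 0;

    for (int i = 0; i < N; i++) sum += exp(logits[i]);
    for (int i = 0; i < N; i++) {
        probs[i] = exp(logits[i]) / sum;       /* softmax: pure arithmetic */
        if (probs[i] > probs[best]) best = i;
        printf("%-4s %.2f\n", vocab[i], probs[i]);
    }
    printf("predicted next token: %s\n", vocab[best]);
    return 0;
}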
Major Homebuilder To Test Placing Mini Data Centers in Suburban Backyards
NewtonsLaw writes:
According to Realtor.com, a California startup called Span plans to partner with Nvidia, PulteGroup, and other homebuilders to equip new homes with mini-data centers, reducing the need to build and power much larger traditional data centers. The article states the company “can install 8,000 XFRA units about six times faster and at five times lower cost than the construction of a typical centralized 100 megawatt data center of the same size.” Could this be the solution to at least some of the problems hindering the rollout of greater data-center capacity for AI systems?
“One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity,” Span said. “As big data center developers struggle to find power sources and distribution capacity, XFRA uses capacity that’s already available.”
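Running the numbers from the company’s own claims, the fit is at least plausible: 100 megawatts spread across 8,000 XFRA units works out to 12.5 kW per home. Assuming a common US residential service of 200 amps at 240 volts (a 48 kW panel; smaller 100-amp services are also widespread), a home using only 40 percent of that capacity would have roughly 29 kW of headroom, more than double what each unit would draw. Whether neighborhood distribution feeders, which are not sized for every home pulling its full panel rating at once, can handle thousands of units running flat-out is a separate question the pilot would presumably need to answer.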
The startup says it will launch a 100-home proof of concept within the year to see if the idea is viable.
Single Dose of Magic Mushroom Psychedelic Can Cause Anatomical Brain Changes
A small study found that a single 25mg dose of psilocybin produced measurable brain changes that were still visible a month later, along with reported improvements in psychological insight, wellbeing, and mental flexibility. The Guardian reports:
Evidence for the changes came from specialized scans that measured the diffusion of water along nerve bundles in the brain. They suggested that some nerve tracts had become denser and more robust after the drug was taken. While the findings are preliminary, the scientists said the opposite was seen in ageing and dementia. “It’s remarkable to see potential anatomical brain changes one month after a single dose of any drug,” said Prof Robin Carhart-Harris, a neurologist at the University of California, San Francisco, and senior author on the study. “We don’t yet know what these changes mean, but we do note that overall, people showed positive psychological changes in this study, including improved wellbeing and mental flexibility.”
[…] Writing in Nature Communications, the researchers describe another key finding. Those who had the largest spike in brain entropy after psilocybin were most likely to report deeper psychological insight and better wellbeing a month later, underlining the link between flexible thinking and improved mental health. “It suggests a psychobiological therapeutic action for psilocybin,” said Carhart-Harris. Prof Alex Kwan, a neuroscientist at Cornell University in New York, said studies in mice had shown that psychedelics can rewire connections between nerves, a form of “plasticity” that could underlie their therapeutic effects. The big question is whether the same occurs in humans. “This study comes closer than most to addressing that question, by giving evidence of lasting changes in brain structure after psychedelic use,” he said. But while the results were “exciting,” the study involved a small number of people and DTI (diffusion tensor imaging, the water-diffusion scanning described above) provides an indirect and limited view of brain connections, he said.
Sam Altman’s Management Style Comes Under the Microscope At OpenAI Trial
Sam Altman’s management style came under scrutiny on the seventh day of Elon Musk’s high-stakes OpenAI trial, as former OpenAI figures Mira Murati, Shivon Zilis, and Helen Toner testified about their experiences working with him. Their testimony resurfaced many of the criticisms that first emerged during Altman’s brief ouster as CEO in 2023. An anonymous reader quotes a report from Business Insider:
The first witness was Mira Murati, OpenAI’s former chief technology officer and now founder of her own AI shop, Thinking Machines Lab. Jurors watched a recorded video deposition of Murati, who was also OpenAI’s interim CEO after the board briefly ousted Sam Altman. Murati’s testimony focused on her concerns about Altman’s “difficult and chaotic” management style. She said Altman had trouble “making decisions on big controversial things.” He also had a habit of telling people what they wanted to hear.
“My concern was about Sam saying one thing to one person and a completely different thing to another person, and that makes it a very difficult and chaotic environment to work with,” said Murati. Murati said that her issue with Altman was not about safety, “it is about Sam creating chaos.” She said she supported Altman’s return to OpenAI because the company “was at catastrophic risk of falling apart” at the time of his ousting. “I was concerned about the company completely blowing up.”
Zilis said she was upset that Altman rolled out ChatGPT without involving the board. “It wasn’t just me but the entire board raised concern about that whole thing happening without any board communication,” she said. Zilis said she was also concerned about a potential OpenAI deal with a nuclear energy startup called Helion Energy because both Altman and Greg Brockman were investors. Although the executives had disclosed the investment to the board, Zilis said the deal talk made her uneasy. It “felt super out of left field,” she said. “How is it the case that we want to place a major bet on a speculative technology?”
In a video deposition, Helen Toner, a former member of OpenAI’s board who resigned in 2023, said she first became aware of ChatGPT’s release when an OpenAI employee asked another board member whether the board was aware of the development. […] Toner also elaborated on why the board, including herself, voted to remove Altman as CEO in 2023. “There were a number of things — the pattern of behavior related to his honesty and candor, his resistance of board oversight, as well as the concerns that two of his inner management team raised to the board about his management practices, his manipulation of board processes,” said Toner.
Recap:
Brockman Rebuts Musk’s Take On Startup’s History, Recounts Secret Work For Tesla (Day Six)
OpenAI President Discloses His Stake In the Company Is Worth $30 Billion (Day Five)
Musk Concludes Testimony At OpenAI Trial (Day Four)
Elon Musk Says OpenAI Betrayed Him, Clashes With Company’s Attorney (Day Three)
Musk Testifies OpenAI Was Created As Nonprofit To Counter Google (Day Two)
Elon Musk and OpenAI CEO Sam Altman Head To Court (Day One)
Microsoft Edge Stores Passwords In Plaintext In RAM
Longtime Slashdot reader UnknowingFool writes:
Security researcher Tom Joran Sonstebyseter Ronning has found that Microsoft Edge stores passwords in plaintext in RAM. After creating a password and storing it using Edge’s password manager, Ronning found that he could dump the RAM and recover his password, which was stored in plaintext. Part of the issue is that Edge loads the passwords for all sites upon a single verification check, even if the user is not visiting a specific site. This is very different from Chrome, which only loads the password for a specific website when challenged for that site’s password. Chrome will also delete the password from memory once it has been filled in; Edge does not delete passwords from memory once they are used.
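The wipe-after-use behavior attributed to Chrome is a standard defensive pattern. Below is a minimal C sketch of it, with fill_form() as a hypothetical stand-in for the browser’s autofill step; explicit_bzero() (glibc 2.25 and later, also on the BSDs) is used instead of plain memset(), which compilers may silently drop when the buffer is never read again.

/*
 * Wipe-after-use sketch: keep a secret in memory only as long as it
 * is needed, then zero it so a later RAM dump recovers nothing.
 * Build: cc wipe.c
 */
#include <stdio.h>
#include <string.h>

static void fill_form(const char *secret) {
    /* hypothetical stand-in for handing the password to a login form */
    printf("autofilling %zu-byte secret\n", strlen(secret));
}

int main(void) {
    char password[64] = "hunter2";  /* decrypted only when needed */

    fill_form(password);

    /* zero immediately after use; explicit_bzero cannot be optimized away */
    explicit_bzero(password, sizeof password);
    return 0;
}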
Microsoft downplayed the risk, noting that access would require control over a user’s PC, such as through a malware infection: “Access to browser data as described in the reported scenario would require the device to already be compromised,” Microsoft said. Ronning countered that one user’s administrative privileges could be used to dump the passwords of other logged-on users.
“Design choices in this area involve balancing performance, usability, and security, and we continue to review it against evolving threats,” Microsoft said. “Browsers access password data in memory to help users sign in quickly and securely — this is an expected feature of the application. We recommend users install the latest security updates and antivirus software to help protect against security threats.”
This is a systemic problem, not an isolated one
The consequences of that are now here. What were 8,000 targets are now: 1. And this isn’t the only such application — for example, much the same thing is true of email. And thus attackers now have the luxury of focusing their efforts on a single target and leveraging that into extortion against 8,000. None of the clueless, selfish, ignorant administrators responsible for this debacle will admit any responsibility — ever. They’re too busy enjoying their mansions while graduate students struggle to afford ramen for breakfast, lunch, and dinner, and junior faculty are forced to moonlight in order to make ends meet.
2. Instructure is following the standard playbook here: lie, lie, lie. They’re doing that because they know they can and because no one will ever hold them accountable. It’s clear from what we already know that this was a very thorough hack, Instructure knows it was a very thorough hack, and they’re doing everything they can to hide that fact. And as a result of that, they’re deliberately making it impossible for everyone at those 8,000 institutions to understand what really happened and to take appropriate defensive measures (if any, if possible). Instructure isn’t in the least bit concerned about the damage done to all the students and faculty; Instructure only cares about itself.