[-] SnotFlickerman@lemmy.blahaj.zone 39 points 7 hours ago

So only the people who refuse to take precautions should be impacted.

Only if transmission between those people doesn't result in a mutation that turns it airborne. That's not an "if" I'd personally like to risk. To assume it will only affect those who don't take precaution is foolish at best and cruelly disingenuous at worst.

Cloudflare can still go bad, but it's usually for high-capacity users who use far more than the average. I haven't seen any home-server users run into trouble, but I've seen a couple of small businesses have bad experiences with Cloudflare, though they honestly seem to be the minority.

Cloudflare has issues, but for most it's probably fine.

I always have good success with libgen.is

[-] SnotFlickerman@lemmy.blahaj.zone 53 points 2 days ago* (last edited 2 days ago)

Shocker, the suits who helped force out the original creative team just said "Fuck you and the horse you rode in on" to what was left of the creative team after the departure of Kurvitz, Rostov, and Hindpere.

I know Tuulik wanted the rest of the team to have a chance to do a follow-up, but this aligns with why some promoted the idea that they were happy with the game being pirated: the people holding the purse strings don't actually care about it.

[-] SnotFlickerman@lemmy.blahaj.zone 27 points 2 days ago* (last edited 2 days ago)

People like this are jerks who only share their library with people who already have ultra rare shit. They want to trade rare for rare.

They think they're special for their collections, and their actions are antithetical to the entire piracy ethos.

I would ignore them and keep searching for another source, because it's unlikely you have anything they'll willingly trade for.

It's dumb and I can't stand those people.

But it makes me laugh when their collections are locked up tight while other people are freely sharing the exact same material.

The stuff I don't want to share... I just don't share at all. No reason to make it frustrating.

get a hole drilled in your tailpipe

[-] SnotFlickerman@lemmy.blahaj.zone 7 points 3 days ago* (last edited 3 days ago)

It's the end of things being easy, that's for sure. But maybe that's okay.

Humanity is in for a wild ride with climate change coming. It will upend entire food chains, let alone nations.

Sure, there's definitely been worse and more unstable periods in history before, but what's coming is very likely going to make those look tame in comparison.

I fully expect Eco-Fascism to take hold at some point, and the very people who denied the existence of climate change will demand full control of the last vestiges of the planet's resources, because in their minds only they are smart and capable enough to dole out what's left to the plebeians.

In other words, things have been a hell of a lot worse before, and they could get a hell of a lot worse. Instead of waiting in anticipation for the worst that may come, do your best to lead a good, kind, and loving life with the people close to you. Things feel like they're getting worse all the time, and hell, maybe they really are...

But it's better to count your blessings now than to waste your life acting like it's all already as bad as it can be, or that the badness is just around the corner. Maybe it is just around the corner. All the more reason to savor the little joys of life while you still have them, and to build connections in your community while there's still time to build Mutual Aid networks. Those two things alone, community and fond memories, can make a dark future easier to endure.

"Ramirez has caught the parking lot frog" sums it up so succinctly.

Musk is only second to Trump in never having learned to shut his stupid fucking mouth.

[-] SnotFlickerman@lemmy.blahaj.zone 81 points 6 days ago* (last edited 6 days ago)

Look around you, we're winning, and we're not even trying. We are literally in the midst of a mass extinction event driven by human behavior.

Maybe roaches will outlast us, but we're headlong into making this planet pretty unlivable for almost all species, let alone ourselves.

submitted 1 month ago* (last edited 1 month ago) by SnotFlickerman@lemmy.blahaj.zone to c/technology@lemmy.world

Copied from Reddit's /r/cscareerquestions:

The US Department of Labor is proposing a rule change that would add STEM occupations to their list of Schedule A occupations. Schedule A occupations are pre-certified and thus employers do NOT have to prove that they first sought American workers for a green card job. This comes on the heels of massive layoffs from the very people pushing this rule change.

From Tech Target:

The proposed exemption could be applied to a broad range of tech occupations including, notably, software engineering -- which represents about 1.8 million U.S. positions, according to U.S. labor statistics data -- and would allow companies to bypass some labor market tests if there's a demonstrated shortage of U.S. workers in an occupation.

Currently, the comments include heavy support from the libertarian think tank Cato and the American Immigration Lawyers Association.

The San Francisco Tech scene has been riddled with CEOs whining over labor shortages for the past few months on Twitter/X amidst a sea of layoffs from Amazon, Meta, Google, Tesla, and much more. Now, we know that it's an attempt at influencing the narrative for these rule changes.

If you are having a hard time finding a job now, this rule change will only make things worse.

From the US Census Bureau:

Does majoring in STEM lead to a STEM job after graduation?

The vast majority (62%) of college-educated workers who majored in a STEM field were employed in non-STEM fields such as non-STEM management, law, education, social work, accounting or counseling. In addition, 10% of STEM college graduates worked in STEM-related occupations such as health care.

The path to STEM jobs for non-STEM majors was narrow. Only a few STEM-related majors (7%) and non-STEM majors (6%) ultimately ended up in STEM occupations.

If you or someone you know has experienced difficulty finding an engineering job post-graduation amidst this so-called shortage, then please submit your story in the remaining few days that the public comment period is still open (it ends May 13th).

Public comment can be made here:


Please share this with anyone else you feel will be affected by this rule change.

submitted 1 month ago* (last edited 1 month ago) by SnotFlickerman@lemmy.blahaj.zone to c/news@lemmy.world


Hmm, I wonder why this shadowy organization sounds so... familiar?


I think it might be safe to file this one under "Good News." It sounds like everyone kept their jobs and the union is intact.

The Man Who Killed Google Search (www.wheresyoured.at)

Edward Zitron has been reading all of Google's internal emails that have been released as evidence in the DOJ's antitrust case against Google.

This is the story of how Google Search died, and the people responsible for killing it.

The story begins on February 5th 2019, when Ben Gomes, Google’s head of search, had a problem. Jerry Dischler, then the VP and General Manager of Ads at Google, and Shiv Venkataraman, then the VP of Engineering, Search and Ads on Google properties, had called a “code yellow” for search revenue due to, and I quote, “steady weakness in the daily numbers” and a likeliness that it would end the quarter significantly behind.

HackerNews thread: https://news.ycombinator.com/item?id=40133976

MetaFilter thread: https://www.metafilter.com/203456/The-core-query-softness-continues-without-mitigation



Casey's expletive-laden rant continued, "You're being duped by a bunch of grifters and billionaires who don't give a shit about you or your family. They care about their fucking tax breaks and the money they can put in their pocket. If you consider yourself a patriot and you're spouting off that election-denying shit, I will fight your ass outside if you want to. Wake the fuck up!"

submitted 3 months ago* (last edited 3 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/plex@lemmy.ml

In January and February I had curated some playlists and shared them with friends and we watched them together via Watch Together. There was previously an option to Grant Access to the playlist, and after granting access, you could click Watch Together and start a watch party.

However, sometime in the last few weeks this option has disappeared in playlists, and now I am restricted to granting access, but not being able to watch together.

Really, the only people who have access to my server are my partner and three friends. This has been a huge bummer, because I was curating old shows complete with old commercials in between.

If anyone has info on why this changed, I'd love to have an understanding, because the change kind of blows...


Archive Options Failing, Text Follows:

Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends

The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights

Minutes after the board of OpenAI fired CEO Sam Altman, saying he failed to be truthful, he exchanged texts with Brian Chesky, the billionaire chief executive of Airbnb.

“So brutal,” Altman wrote to his friend. Later that day, Chesky told Microsoft’s CEO Satya Nadella, OpenAI’s biggest partner, “Sam has the support of the Valley.” It was no exaggeration.

Over the weekend, Altman rallied some of Silicon Valley’s most influential CEOs and investors to his side, including Vinod Khosla, co-founder of Sun Microsystems and the founder of Khosla Ventures, OpenAI’s first venture-capital investor; Ron Conway, an early investor in Google and Facebook; and Nadella. Days later, Altman returned as OpenAI’s chief executive.

Altman’s firing and swift reversal of fortune followed a pattern in his career, which began when he dropped out of Stanford University in 2005 and gained the reputation as a Silicon Valley visionary. Over the past two decades, Altman has lost the confidence of several top leaders in the three organizations he has directed. At every crisis point, Altman, 38 years old, not only rebounded but climbed to more powerful roles with the help of an expanding network of powerful allies.

A group of senior employees at Altman’s first startup, Loopt—a location-based social-media network started in the flip-phone era—twice urged board members to fire him as CEO over what they described as deceptive and chaotic behavior, said people familiar with the matter. But the board, with support from investors at venture-capital firm Sequoia, kept Altman until Loopt was sold in 2012.

Two years later, Altman was a surprise pick by Y Combinator co-founder Paul Graham to head the startup incubator that helped launch Airbnb and Dropbox. Graham had once compared Altman with Steve Jobs and said he was one of the “few people with such force of will that they’re going to get what they want.”

Altman’s job as president of the incubator put him at the center of power in Silicon Valley. It was there he counseled Chesky through Airbnb’s spectacular ascent and helped make grand sums for tech moguls by pointing out promising startups.

In 2019, Altman was asked to resign from Y Combinator after partners alleged he had put personal projects, including OpenAI, ahead of his duties as president, said people familiar with the matter.

This fall, Altman also faced a crisis of trust at OpenAI, the company he navigated to the front of the artificial-intelligence field. In early October, OpenAI’s chief scientist approached some fellow board members to recommend Altman be fired, citing roughly 20 examples of when he believed Altman misled OpenAI executives over the years. That set off weeks of closed-door talks, ending with Altman’s surprise ouster days before Thanksgiving.

Altman’s gifts as a deal-maker, talent scout and pitchman helped turn OpenAI into a business some investors now value at $86 billion. The loyalty he engendered through his success mobilized high-profile supporters after his firing and inspired employees to threaten a mass exit.

“A big secret is that you can bend the world to your will a surprising percentage of the time,” Altman wrote in his personal blog two months before his exit from Y Combinator.

Over his career, Altman has shown skill in bending circumstances to his favor. His ability to bounce back will be tested once again. Scrutiny of his management is expected in coming months. OpenAI’s two new board members have commissioned an outside investigation into the causes of the company’s recent turmoil, conducted by Washington law firm WilmerHale, including Altman’s performance as CEO and the board’s reasons for firing him.

“The senior leadership team was unanimous in asking for Sam’s return as CEO and for the board’s resignation, actions backed by an open letter signed by over 95% of our employees. The strong support from his team underscores that he is an effective CEO,” said an OpenAI spokeswoman.

This article is based on interviews with dozens of executives, engineers, current and former employees and friends of Altman, as well as investors.

Center stage

Altman was a 19-year-old Stanford sophomore studying computer science when he stepped into the limelight at a campus entrepreneur event in 2005. He stood onstage, held up a flip phone and said he had just learned all cellphones would soon have a Global Positioning System, now commonly known as GPS.

Altman asked anyone interested to join him to figure out how best to pair the technologies. He and his co-founders decided on a flip-phone app that would let people track their friends on a map, which Altman would later pitch as a remedy for loneliness.

During a later entrepreneurship competition, Altman impressed Patrick Chung, who had just joined New Enterprise Associates, a venture-capital firm, and was one of the event’s judges. NEA teamed up with Sequoia and offered Altman and his team $5 million to pursue their idea.

Altman dropped out of school and Loopt was born. An early investor was Y Combinator, a startup incubator founded by Paul Graham and his then-girlfriend, now wife, Jessica Livingston. Altman soon became a favorite of Graham’s.

A few years after the company’s launch, some Loopt executives voiced frustration with Altman’s management. There were complaints about Altman pursuing side projects, at one point diverting engineers to work on a gay dating app, which they felt came at the expense of the company’s main work.

Senior executives approached the board with concerns that Altman at times failed to tell the truth—sometimes about matters so insignificant one person described them as paper cuts. At one point, they threatened to leave the company if he wasn’t removed as CEO, according to people familiar with the matter. The board backed Altman.

“If he imagines something to be true, it sort of becomes true in his head,” said Mark Jacobstein, co-founder of Jimini Health who served as Loopt’s chief operating officer. “That is an extraordinary trait for entrepreneurs who want to do super ambitious things. It may or may not lead one to stretch, and that can make people uncomfortable.”

Altman doesn’t recall employee complaints beyond the normal annual CEO review process, according to people familiar with his thinking.

Among the most important relationships that Altman made at Loopt was with Sequoia, whose partner, Greg McAdoo, served on Loopt’s board and led the firm’s investment in Y Combinator around that time. Altman also became a scout for Sequoia while at Loopt, and helped the firm make its first investment in the payments firm Stripe—now one of the most valuable U.S. startups.

Michael Moritz, who led Sequoia, personally advised Altman. When Loopt struggled to find buyers, Moritz helped engineer an acquisition by another Sequoia-backed company, the financial technology firm Green Dot.

“I saw in a 19-year-old Sam Altman the same thing that I see now: an intensely focused and brilliant person whom I was willing to bet big on,” said Chung, now managing general partner of Xfund, a venture-capital firm.

Man versus machine

Graham’s selection of Altman to lead Y Combinator in 2014 surprised many in Silicon Valley, given that Altman had never run a successful startup. Altman nonetheless set a high goal—to expand the family-run operation into a business empire.

He made as many as 20 introductions a day, helping connect people in Y Combinator’s orbit. He helped Greg Brockman, the former chief technology officer of Stripe, make a mint selling his shares in the successful payments company to buyers including Y Combinator. Brockman co-founded OpenAI in 2015 and became its president.

Altman turned Y Combinator into an investing powerhouse. While serving as the president, he kept his own venture-capital firm, Hydrazine, which he launched in 2012. He caused tensions after barring other partners at Y Combinator from running their own funds, including the current chief executive, Garry Tan, and Reddit co-founder Alexis Ohanian. Tan and Ohanian didn’t respond to requests for comment.

Altman also expanded Y Combinator through a nonprofit he created called YC Research, which served as an incubator for Altman’s own projects, including OpenAI. From its founding in 2015, YC Research operated without the involvement of the firm’s longtime partners, fueling their concern that Altman was straying too far from running the firm’s core business.

Altman believed OpenAI was primed for AI breakthroughs, including artificial general intelligence—an AI system capable of performing intellectual tasks as well as or better than humans. Altman helped recruit Ilya Sutskever from Google to OpenAI in 2015, which attracted many of the world’s best AI researchers.

By early 2018, Altman was barely present at Y Combinator’s headquarters in Mountain View, Calif., spending more time at OpenAI, at the time a small research nonprofit, according to people familiar with the matter.

The increasing amount of time Altman spent at OpenAI riled longtime partners at Y Combinator, who began losing faith in him as a leader. The firm’s leaders asked him to resign, and he left as president in March 2019.

Graham said it was his wife’s doing. “If anyone ‘fired’ Sam, it was Jessica, not me,” he said. “But it would be wrong to use the word ‘fired’ because he agreed immediately.”

Jessica Livingston said her husband was correct.

To smooth his exit, Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm’s website announcing the change. But the firm’s partnership had never agreed, and the announcement was later scrubbed from the post.

For years, even some of Altman’s closest associates—including Peter Thiel, Altman’s first backer for Hydrazine—didn’t know the circumstances behind Altman’s departure.


At OpenAI, Altman recruited talent, oversaw major research advances and secured $13 billion in funding from Microsoft. Sutskever, the company’s chief scientist, directed advances in large language models that helped form the technological foundation for ChatGPT—the phenomenally successful AI chatbot. Sequoia was one of OpenAI’s investors.

As the company grew, management complaints about Altman surfaced.

In early fall this year, Sutskever, also a board member, was upset because Altman had elevated another AI researcher, Jakub Pachocki, to director of research, according to people familiar with the matter.

Sutskever told his board colleagues that the episode reflected a long-running pattern of Altman’s tendency to pit employees against one another or promise resources and responsibilities to two different executives at the same time, yielding conflicts, according to people familiar with the matter.

“Ilya has taken responsibility for his participation in the Board’s actions, and has made clear that he believes Sam is the right person to lead OpenAI,” Alex Weingarten, a lawyer representing Sutskever, said in a statement. He described as inaccurate some accounts given by people familiar with Sutskever’s actions but didn’t identify any alleged inaccuracies.

Altman has said he runs OpenAI in a “dynamic” fashion, at times giving people temporary leadership roles and later hiring others for the job. He also reallocates computing resources between teams with little warning, according to people familiar with the matter.

Other board members already had concerns about Altman’s management. Tasha McCauley, an adjunct senior management scientist at Rand Corp., tried to cultivate relationships with employees as a board member. Past board members chatted regularly with OpenAI executives without informing Altman. Yet during the pandemic, Altman told McCauley he needed to be told if the board spoke to employees, a request that some on the board viewed as Altman limiting the board’s power, people familiar with the matter said.

Around the time Sutskever aired his complaints, the independent board members heard similar concerns from some senior OpenAI executives, people familiar with the discussions said. Some considered leaving the company over Altman’s leadership, the people said.

Altman also misled board members, leaving the impression with one board member that another wanted board member Helen Toner removed, even though it wasn’t true, according to people familiar with the matter, The Wall Street Journal reported.

The board also felt nervous about Altman’s ability to use his Silicon Valley influence, so when members decided to fire him, they kept it a secret until the end. They gave only minutes’ notice to Microsoft, OpenAI’s most important partner. The board in a statement said Altman had failed to be “consistently candid” and lost their trust, without giving specific details.

Altman retreated to his 9,500-square-foot house in the city’s Russian Hill neighborhood, overlooking San Francisco.

One of his key allies was Chesky. Shortly after Altman was fired, Chesky hopped on a video call with Altman and Brockman, who had been removed from the board that day and quit the company in solidarity with Altman. Chesky asked why it happened. Altman theorized it might have been about the dust-up with Toner or Sutskever’s complaints.

Satisfied that it wasn’t a criminal matter, Chesky phoned Nadella, the Microsoft CEO.

A small group of Silicon Valley power brokers, including Chesky and Conway, advised Altman and worked the phones, trying to negotiate with the board.

The board named Emmett Shear, an OpenAI outsider, as interim CEO, drawing threats to resign by most of the company’s employees. In another lucky turn of fortune for Altman, Shear was an ally and a mentor of Chesky’s.

Together, Chesky and Shear helped clear a path for Altman’s return.

submitted 6 months ago* (last edited 6 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/asklemmy@lemmy.ml

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control and dangerous AI who has decided "humans are the problem." (I mean, that's a little sci-fi anyway, an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.

So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably be able to leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the same CPU power and somehow still compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end-goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it's clear he couldn't give a living flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same: he claimed only he and his non-profit could "safeguard" AGI, and here he is going to work for a private company, because hot damn, he never actually gave a shit about safeguarding AGI to begin with. He's a shit-slinging hypocrite of the highest order.

  4. Last, but certainly not least. Annie Altman, Sam Altman's younger, lesser-known sister, has held for a long time that she was sexually abused by her brother. All of these rich people are all Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know this or vet this. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.

So do other Lemmings agree, or have other thoughts on this?

And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.


