(comments)

Original link: https://news.ycombinator.com/item?id=40748371

In this article, the author discusses the complexity and risk of managing large projects, with particular reference to the Space Shuttle program and the infamous Challenger disaster. They emphasize the importance of listening to engineers' concerns while also acknowledging the need to keep moving forward and take calculated risks. The author shares personal experience and insights gained over their career, highlighting the importance of leadership, communication, and accountability. They also touch on ethical considerations and the role of capitalism in driving decisions. Overall, the article illustrates the delicate balance between technical expertise, business success, and ethical responsibility required to manage such projects.



I wonder how often things like that happen.

The launch could have gone right, and no one would have known anything about the decision process besides a few insiders. I am sure that on a project as complex and as risky as a Space Shuttle, there is always an engineer that is not satisfied with some aspect, for some valid reason. But at some point, one needs to launch the thing, despite the complaints. How many projects luckily succeeded after a reckless decision?

In many accidents, we can point at an engineer who foreshadowed it, as is the case here. Usually followed by blaming those who proceeded anyway. But these decision makers are in a difficult position. Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done. So, whose "no" to ignore? Not Allan's, apparently.



Often.

I used to run the nuclear power plant on a US Navy submarine. Back around 2006, we were sailing somewhere and Sonar reported that the propulsion plant was much, much louder than normal. A few days later we didn't need Sonar to report it, we could hear it ourselves. The whole rear half of the ship was vibrating. We pulled into our destination port, and the topside watch reported that oil pools were appearing in the water near the rear end of the ship. The ship's Engineering Officer and Engineering Department Master Chief shrugged it off and said there was no need for it to "affect ship's schedule".

I was in charge of the engineering library. I had a hunch and I went and read a manual that leadership had probably never heard of. The propeller that drives the ship is enormous. It's held in place with a giant nut, but in between the nut and the propeller is a hydraulic tire, a toroidal balloon filled with hydraulic fluid. Clearly it had ruptured. The manual said the ship was supposed to immediately sail to the nearest port and the ship was not allowed to go back out to sea until the tire was replaced. I showed it to the Engineer. Several officers called me in to explain it to them. And then, nothing.

Ship's Schedule was not affected, and we continued on the next several-week trip. Before we got to the next port, we had to limit the ship's top speed to avoid major damage to the entire propulsion plant. We weren't able to conduct the mission we had planned because the ship was too loud. And the multiple times I asked what the hell was going on, management literally just talked over me. When we got to the next port, we had to stay there while the propeller was removed and remachined. Management doesn't give a shit as long as it doesn't affect their next promotion.

Don't even get me started on the nuclear safety problems.



The correct answer in that case is to go to the Inspector General. That's what they're there for. Leaders sweeping shit under the rug that ends up crippling a fleet asset and preventing tasking from higher is precisely the kind of negligence and incompetence the IG is designed to root out.

And I say that as a retired officer.



Honest question: what are the plausible outcomes for an engineer who reports this kind of issue to the IG?

I'm guessing there's a real possibility of it ending his career, at least as a member of the military.



The IG is an independent entity which exists to investigate misconduct and fraud/waste/abuse. There are Inspectors General at all levels from local bases up to the Secretary of Defense, and they have confidential reporting hotlines. The only thing worse for a commander than having shenanigans be substantiated at an IG investigation is to have been found to tolerate retaliation against the reporters.

Generally about every month or two, a Navy commanding officer gets canned for "loss of confidence in his/her ability to command." They aren't bulletproof, quite the opposite. And leaving out cases of alcohol misuse and/or sexual misconduct, other common causes are things within the IG's purview.



Probably. The biggest blind spot internal auditors have is things that didn't leave a paper trail.

It is too common that such investigations don't even start because there is just one connecting piece of evidence missing.

Leave a paper trail people!



Much more realistically:

Individual A reports a unique or rare problem. Everyone knows it is reported by person A.

Nothing is done.

Person A reports the problem "anonymously" to some third party, which raises a stink about the problem.

Now everyone knows that person A reported the problem to the third party.

This is why I (almost) never blow the whistle. It's an automatic career-ending move, and any protections are make-believe at best.



Then Person A needs to haul their butt to the Defense Service Office, call their Member of Congress, and tell the "anonymous" hotline that they've been retaliated against.

I'm not pretending this is some magic ticket to puppy-rainbow-fairy land where retaliation never occurs, but ultimately, how much do you care about your shipmates? I had a CPO once as one of my direct reports committing major misconduct and threatening my shop with retaliation if they reported it. I could have helped crush the bastard if someone had come forward to me, but no one ever did until I'd turned over the division to someone else, after which it blew up. Sure, he eventually got found out, but still. He was a great con artist and he pulled the wool over my eyes, but all I'd have needed is one person cluing me in to that snake.

Speaking from the senior officer level, we're not all some cabal trying to sweep shit under the rug. And the IGs, as much as they're feared, aren't out to nail people to the wall who haven't legitimately done bad things. I'm sorry you've had the experience you've had, but that doesn't mean that everyone above you was some big blue wall willing to protect folks who've done wrong.



The incompetent group together, they have to in order to survive.

The competent don't group together, they don't need to. They can take care of themselves.

The former uses their power as a group against the individuals in the latter.

Basically the plot of Atlas Shrugged.



Atlas Shrugged? The book written by that demented woman who couldn't deal with her own feelings but told everyone how individualism was the answer to everything while living thanks to other people's support?

That book?



Objectivism, like many philosophies or political beliefs, only works in an absolute vacuum.

Maybe the one person who survives the first trip to Mars can practice it.



I'm not an objectivist. My comment is the extent of the Ayn Rand beliefs I hold, for the most part.

When you work on ideas instead of personalities you get to do that.

Nobody here tried to disprove my comment. Just a few people started complaining about a dead woman whose book I mentioned in passing.

They got together and argued, incompetently. Demonstrating the effect I was attempting to illustrate.



I guess the true fate is the competent arguing amongst one another in an attempt to establish who is most competent, while the incompetent group together and bask in the real rewards. The goals of the incompetent are simple and tangible. The goals of the competent are abstract, as they seek acceptance from their fellow competent peers.



Objectivism: that fart-huffing philosophy that leads people to think everyone else is incompetent to judge it, when it's just a bunch of hateful trash that is to the right as Marxism is to the left.



How long retired? Things have gone in what can only be described as an.. incomprehensible unfathomable direction in the last decade or so. Parent post is not surprising in the least.

Politics is seeping where it doesn't belong.

I am very worried.



To a first approximation: https://www.youtube.com/watch?v=KZB7xEonjsc

Less funny in real life. Sometimes the jizzless thing falls off with impeccably bad timing. Right when things go boom. People get injured (no deaths yet). Limp home early. Allies let down. Shipping routes elongate by a sad multiple. And it even affects you directly as you pay extra for that Dragon silicon toy you ordered from China.



Just google the Red Hill failure.

The Navy's careerist, bureaucratic incompetence is staggering. No better than Putin's generals who looted the military budget and crippled his army so they couldn't even beat a military a fraction of their size.



Recently. For those who've served, it's not a surprise to see the constant drumbeat of commanding officers being relieved of command every month or so. COs are not bulletproof, and the last thing anyone in the seat wants is to end up crossways with the IG. And there are confidential ways Sailors can get in touch with them if needed.

Or with their Member of Congress, who can also go to Big Navy and ask "WTF is going on with my constituent?"



> Don't even get me started on the nuclear safety problems.

I want to be pro-nuclear energy, but I just don't think I can trust the majority of human institutions to handle nuclear plants.

What do you think about the idea of replacing all global power production with nuclear, given that it would require many hundreds of thousands of loosely-supervised people running nuclear plants?



There's also the issue of force majeure - war, terrorism, natural disasters, and so on. Increase the number of these and not only can you not really maintain the same level of diligence, but you also increase the odds of them ending up in an unfortunate location or event.

There's also the issue of the uranium. Breeder reactors can help increase efficiency, but they bump up all the complexities/risks greatly. Relatively affordable uranium is a limited resource. We have vast quantities of it in the ocean, but it's not really feasible to extract. It's at something like 3.3 parts per billion by mass. So you'd need to filter a billion kg of ocean water to get 3.3kg of uranium. Outside of cost/complexity, you also run into ecological issues at that scale.
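
To make that scale concrete, here is a quick back-of-the-envelope check of those numbers (a sketch; the ~3.3 ppb concentration quoted above is the only input):

    # Back-of-the-envelope check of the seawater-uranium arithmetic above.
    # Only input: ~3.3 parts per billion uranium by mass (from the comment).
    uranium_ppb = 3.3          # kg of uranium per billion kg of seawater
    seawater_kg = 1e9          # one billion kg of seawater

    uranium_kg = seawater_kg * uranium_ppb / 1e9
    print(f"{seawater_kg:.0e} kg of seawater holds ~{uranium_kg:.1f} kg of uranium")

    # Inverted: seawater to process per tonne of uranium recovered, assuming
    # (unrealistically) perfect extraction efficiency.
    kg_water_per_tonne_u = 1000 / (uranium_ppb / 1e9)
    print(f"~{kg_water_per_tonne_u:.1e} kg of seawater per tonne of uranium")

That works out to roughly 3e11 kg of seawater per tonne of uranium, which is why the filtering volume (and the ecological footprint) dominates the cost picture.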



Given the scale of people killed by coal every year, I feel relatively confident that had that effort not been undertaken, it would still be true.

And of course that's ignoring the fact that I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear



I don't think the scale of coal is 200m+ people a year. That's taking artistic liberties or is too hyperbolic to entertain.

>I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear

I don't. Einstein's quote rings alarms in my head here. Imagine all the inane incompetencies you've seen with current energy systems in your house, or at a mechanic, or simply flickering lights at a restaurant. Now imagine that those same people manage small fusion/fission bombs powering such devices.

We need to value labor a lot more to trust that sort of maintenance. And the US alone isn't too good at that. Let alone most of Asia and EMEA.



Is this a different phenomenon though? It seems that there's a difference between an informed risk assessment and not giving a fuck or letting the bureaucratic gears turn and not feeling responsible. Like there's a difference between Challenger and Chernobyl.

But, maybe someone can make a case that it's fundamentally the same thing?



I would make the case that it's fundamentally the same thing.

In both cases, there were people who cared primarily about the technical truth, and those people were overruled by people who cared primarily about their own lifestyle (social status, reputation, career, opportunities, loyalties, personal obligations, etc.). In Allan McDonald's book "Truth, Lies, and O-Rings" he outlines how Morton Thiokol was having a contract renewal held over their head while NASA Marshall tried to maneuver the Solid Rocket Booster production contract to a second source, which would have seriously affected MT's bottom line and profit margins. There's a strong implication that Morton Thiokol was not able to adhere to proper technical rationale and push back on their customer (NASA) because if they had, they would have given too much ammunition to NASA to argue for a second source for the SRB contracts. (In short: "you guys delayed launches over issues in your hardware, so we're only going to buy 30 SRB flight sets from you over the next 5 years instead of 60 as we initially promised.")

I have worked as a NASA contractor on similar issues, although much less directly impacting the crews than the SRBs. You are not free to pursue the smartest, most technically accurate, quickest method for fixing problems; if you introduce delays that your NASA contacts and managers don't like, they will likely ding your contract and redirect some of your company's work to your direct competitors, who you're often working with on your projects.



What’s the alternative? Being able to shift to a competitor when a producer is letting you down is the entire point of private contracts; without that, you might as well remove the whole assemblage of profit and just nationalize the whole thing.



If you're EB, why replace a hydraulic bushing now when you can wait, replace it later along with a bunch of other damage to repair, and make yourself a nice big extra chunk of change off Uncle Sam?

If you're the ship's captain... why not help secure a nice 'consulting' 'job' at EB after retiring from the navy by helping EB make millions, and count on your officers to not say a peep to fleet command that the mess was preventable?



> Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.

Saying "no" is easy and safe in a world where there are absolutely no external pressures to get stuff done. Unfortunately, that world doesn't exist, and the decision makers in these kinds of situations face far more pressure to say "yes" than they do to say "no".

For example, see the article:

> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.



It’s a shame.

We don’t see software engineers behave ethically in the same way.

Software is filled with so much risk-taking, and there's little if any public pushback from engineers saying the software we've created is harmful.

Here’s a few examples:

- Dark patterns in retail

- Cybersecurity flaws in sensitive software (ie. Microsoft)

- Social media and mental health

- Social media and child exploitation / sex trafficking

- Social media and political murder (ie. Riots, assassinations)

This stuff is happening and it’s just shrugs all-around in the tech industry.

I have a ton of respect for those whistleblowers in AI who seem to be the small exception to this rule.



Everyone seems to be reading this too simply. In fact, stupidly.

Conceptually, the easiest answer to the risk of asserting that you are certain is simply to not assert that you are certain.

They aren't saying it's easy to face your bosses with anything they don't want to hear.



Isn't the definition of "easy" or "hard" that includes the external human pressures the less simple/stupid one? What is the utility of a definition of "easy" that assumes that you work in complete isolation?



The context to this conversation is the launch of a space shuttle that's supposed to carry a teacher to space. It has both enormous stakes and enormous political pressure to not delay/cancel. I'm unsure why that context makes the spherical cow version of "easy" a sensible one.



The context of that word "easy" was not a vacuum, it was part of a sentence which was part of a conversation. There is more than enough of this context to know what in particular was easy.

You can only fail to get this by not reading the thing you are responding to, or deliberate obtuseness, or perhaps by being 12 years old.



> easily be career-ending.

Easily be career ending? That's a bit dramatic, don't you think? Someone who continuously says no to things will surely not thrive and will probably eventually leave the organization, one way or the other; that much is probably right.



Considering the launch tempo that NASA had signed up for, and was then currently failing at? Yes, a single 'no-go' on the cert chain could easily result in someone being shunted into professional obscurity thereafter.



Not even slightly dramatic. I have seen someone be utterly destroyed for trying to speak out on something deeply unethical a state was doing, and is probably still doing.

He was dragged by the head of state in the press and televised announcements, became untouchable overnight - lost his career, his wife died a few days later while at work at her government job in an “accident”. This isn’t in some tinpot dictatorship, rather a liberal western democracy.

So - no. Career-ending is an understatement. You piss the wrong people off, they will absolutely fuck you up.



I have long thought that there ought to be an independently funded International Association for the Protection of Whistleblowers. However, it would quickly become a primary target of national intelligence agencies, so I don't know how long it would last.



A "liberal democracy" where the head of state can have random citizens murdered? And I guess despite being an internet anon, you won't name that country because they will come after you and kill your family as well?

That's either a very tall tale or the state is anything but liberal.



> A "liberal democracy" where the head of state can have random citizens murdered?

Abdulrahman Anwar al-Awlaki (also spelled al-Aulaqi, Arabic: عبدالرحمن العولقي; August 26, 1995 – October 14, 2011) was a 16-year-old United States citizen who was killed by a U.S. drone strike in Yemen.

The U.S. drone strike that killed Abdulrahman Anwar al-Awlaki was conducted under a policy approved by U.S. President Barack Obama

Human rights groups questioned why Abdulrahman al-Awlaki was killed by the U.S. in a country with which the United States was not at war. Jameel Jaffer, deputy legal director of the American Civil Liberties Union, stated "If the government is going to be firing Predator missiles at American citizens, surely the American public has a right to know who's being targeted, and why."

https://en.m.wikipedia.org/wiki/Killing_of_Abdulrahman_al-Aw...



>Abdulrahman al-Awlaki's father, Anwar al-Awlaki, was a leader of al-Qaeda in the Arabian Peninsula

Missed highlighting that part. The boy also wasn't the target of the strike anyway. Was the wife from the other user's story living with an al-Qaeda leader as well?



> Abdulrahman al-Awlaki's father, Anwar al-Awlaki, was a leader of al-Qaeda in the Arabian Peninsula

You are a terrorist if you don't want a foreign power to install a government* over you and you fight to prevent that?

And then further, if your dad does that you should die?

*that has to be noted were literally pedophiles



I’ve spoken about it here somewhat and circumspectly before - but I prefer to keep the SNR low, as I don’t want repercussions for him. Me, good luck finding.

It’s the U.K. It happened under Cameron. It related to the judiciary. That’s as much as I’ll comfortably reveal.

I will also say that it was a factor in me deciding to sell my business, leave the country, and live in the woods, as what I learned from him and his experience fundamentally changed my perception of the system in which we live.



Can someone explain why every govt official that was ever in the news talking about Snowden accused him of being the worst sort of criminal? Specifically, what is the case? They are never forthcoming about details.

I personally am very glad to know the things he revealed.



For the same reason they’ve been torturing Assange for the past decade. They view us as little more than taxable cattle that should not ask any questions, let alone embarrass or challenge the ruling class.



> Saying no isn't what ended his career.

Within NatSec, saying No to embarrassing the government is implied. Ceaselessly.

Equally implied: The brutality of the consequences for not saying no.



My understanding of the Space Shuttle program is that there were a lot of times they knew they probably shouldn't fly, or try to land, and they lucked out and didn't lose the orbiter. It is shocking they only lost two ships out of the 135 Space Shuttle missions.

The safety posture of that whole program, for a US human space program, seemed bad. That they chose to use solid rocket motors shows that they were willing to compromise on human safety from the get-go. There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.



> There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.

That's about to not be true. Atlas V + Starliner has flown two people and has strap-on boosters; I think it only gets the rating once it returns from the test flight though.

The shuttle didn't have a propulsive launch abort system, and could only abort during a percentage of its launch. The performance quoted for starliner's abort motor is "one mile up, and one mile out" based on what the presenter said during the last launch. You're plenty safe as long as you don't intersect the SRB's plume.



I forgot about the SLS until after I wrote that. SLS makes most of the same mistakes, plus plenty of new expensive ones, from the Space Shuttle program. SLS has yet to carry a human passenger though.

It's mind-boggling that SLS still exists at all. At least $1B-$2B in costs whether you launch or not. A launch cadence measured in years. $2B-$4B if you actually launch it. And it doesn't even lift more than Starship, which is launching almost quarterly already. This is before we even talk about reusability, or that a reusable Starship + Super Heavy launch would only use about $2M of propellant.



It happens extremely frequently because there is almost no downside for management to override the engineers' decision.

Even in the case of the Challenger, no single article says WHO was the executive that finally approved the launch. Nobody was jailed for gross negligence. Even Richard Feynman felt that the investigative commission was biased from the start.

So, since there is no "price to pay" for making these bad calls, they are continuously made.



    > Even in the case of the
    > Challenger, no single article
    > say WHO was the executive
    > that finally approved the launch.
The people who made the final decision were Jerald Mason (SVP), Robert Lund, Joe Kilminster and Calvin Wiggins (all VP's).

See page 94 of the Rogers commission report[1]: "a final management review was conducted by Mason, Lund, Kilminster, and Wiggins".

Page 108 has their full names as part of a timeline of events at NASA and Morton Thiokol.

1. https://sma.nasa.gov/SignificantIncidents/assets/rogers_comm...



> Nobody was jailed for gross negligence

Jailing people means you'll have a hard time finding people willing to make hard decisions, and when you do, you may find they're not the right people for the job.

Punishing people for making mistakes means very few will be willing to take responsibility.

It will also mean that people will desperately cover up mistakes rather than being open about it, meaning the mistakes do not get corrected. We see this in play where manufacturers won't fix problems because fixing a problem is an admission of liability for the consequences of those problems, and punishment.

Even the best, most conscientious people make mistakes. Jailing them is not going to be helpful, it will just make things worse.



> Punishing people for making mistakes means very few will be willing to take responsibility.

That’s what responsibility is: taking lumps for making mistakes.

If I make a mistake on the road and end up killing someone, I can absolutely be held liable for manslaughter.

I don’t know if jail time is the right answer, but there absolutely needs to be some accountability.



Have you ever made a mistake on the road that luckily did not result in anyone getting killed?

During WW2, a B-29 crash-landed in the Soviet Union. The B-29's technology was light-years ahead of Soviet engineering. Stalin demanded that an exact replica of the B-29 be built. And that's what the engineers did. They were so terrified of Stalin that they carefully duplicated the battle damage on the original.

Be careful what you wish for when advocating criminal punishment.



Tu-4 was indeed a very close copy of B-29, but no, they did not "carefully duplicate the battle damage" on the original. The one prominent example of copying unnecessary things that is usually showcased in this instance is a mistakenly drilled rivet hole in one of the wings that was carefully reproduced thereafter despite there not being any evident purpose for it.

That said, even then Tu-4 wasn't a carbon copy. Because US used imperial units for everything, Soviets simply couldn't make it a carbon copy because they could not e.g. source plating and wire of the exact right size. So they replaced it with the nearest metric equivalents that were available, erring on the side of making things thicker, to ensure structural integrity - which also made it a little bit heavier than the original. Even bigger changes were made - for example, Tupolev insisted on using existing Soviet engines (!), weapons, and radios in lieu of copying the American ones. It should be noted that Stalin really did want a carbon copy originally, and Tupolev had to fight his way on each one of those decisions.



We should not blame people for honest mistakes. Challenger was not an honest mistake, it was political pressure overriding engineering. The joints were not supposed to leak at all, yet they were leaking every time and it was being swept under the rug. When someone suddenly demands to get it in writing when it was normally a verbal procedure they *know* there's a problem. That's not a mistake.

Same as the insulation damage to the tiles kept being ignored until Columbia barely survived. And then they fixed the part they blamed for that incident, but the tiles kept coming back damaged.

And look at what else was going wrong that day--the boosters would most likely have been lost at sea if the launch had worked.



From the very start they were obviously in cover-up mode.

They had every engineer involved with the booster saying launching in the cold was a bad idea, yet they started by trying to look at all the ways it could have gone wrong rather than even looking into what the engineers were screaming about.

We also have them claiming a calibration error with the pyrometer (the ancestor of the modern thermometer you point at something) even though that made other numbers not make sense.



The "who" was William R. Lucas.

There was a recent Netflix documentary where they interviewed him. He was the NASA manager that made the final call.

On video, he flatly stated that he would make the same decision again and had no regrets: https://www.syfy.com/syfy-wire/netflix-challenger-final-flig...

I had never seen anyone who is more obviously a psychopath than this guy.

You know that theory that people like that gravitate towards management positions? Yeah... it's this guy. Literally him. Happy to send people into the meat grinder for "progress", even though no actual scientific progress of any import was planned for the Challenger mission. It was mostly a publicity stunt!



> at some point, one needs to launch the thing, despite the complaints

There's a big difference between "complaints" because something is not optimal, and warnings that something is a critical risk. The Thiokol engineers' warnings about the O-rings were in the latter category.

And NASA knew that. The summer before the Challenger blew up, NASA had reclassified the O-rings as a Criticality 1 flight risk, where they had previously been Criticality 1R. The "1" meant that if the thing happens the shuttle would be lost--as it was. The "R" meant that there was a redundant component that would do the job if the first one failed--in this case there were two O-rings, primary and secondary. But in (IIRC) June 1985, NASA was told by Thiokol that the primary O-ring was not sealing so there was effectively no redundancy, and NASA acknowledged that by reclassifying the risk. But by the rules NASA itself had imposed, a Criticality 1 (rather than 1R) flight risk was supposed to mean the Shuttle was grounded until the issue was fixed. To avoid that, NASA waived the risk right after reclassifying it.
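
The flight rule described here is mechanical enough to write down. A minimal sketch of the reclassification-then-waiver sequence (the criticality codes and grounding rule are as paraphrased in this comment; the encoding itself is illustrative, not NASA's actual system):

    # Illustrative encoding of the rule described above, not NASA's real system.
    # Criticality "1": failure loses the vehicle and there is no redundancy,
    # which by the book grounds the fleet until fixed. "1R": redundancy exists.
    def may_fly(criticality: str, waived: bool = False) -> bool:
        if criticality == "1R":   # a redundant component covers a failure
            return True
        if criticality == "1":    # loss of vehicle, no redundancy
            return waived         # flies only if management waives the rule
        return True

    assert may_fly("1R")                 # earlier classification: cleared to fly
    assert not may_fly("1")              # 1985 reclassification: grounded by the book
    assert may_fly("1", waived=True)     # the waiver NASA issued right after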

> at some point, one needs to say "yes" and take risks, otherwise nothing would be done

Taking calculated risks when the potential payoff justifies it is one thing. But taking foolish risks, when even your own decision making framework says you're not supposed to, is quite another. NASA's decision to launch the Challenger was the latter.



> But at some point, one needs to launch the thing, despite the complaints.

Or: at some point, one decides to launch the thing.

You are reducing the complaints of an engineer to something inevitable and unimportant, as if it happened at every launch, and at every launch someone decided to go ahead, because it was what was needed.



A lot of people are taking issue with the fact that you need to say yes for progress. I don’t know how one could always say no and expect to have anything done.

Every kind of meaningful success involves negotiating risk instead of seizing up in the presence of it.

The shuttle probably could have failed in 1,000 different ways and eventually, it would have. But they still went to space with it.

Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.



The key thing with Challenger is that the engineers working on the project estimated the risk to be extremely high and refused to budge, eventually being overruled by the executives of their company.

That's different than the engineers calculating the risk of failure at some previously-defined-as-acceptable level and giving the go-ahead.



> I would accept a 50% risk of death.

No offense but this sounds like the sayings of someone who has never faced a 50% chance of death.

It’s a little different 3 to 4 months out. It’s way different the night before and the morning of. Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.

There’s a small percent who will go forward but admit the fact that they were completely afraid - and rightly so.

Then you have that tiny percentage that are completely calm and you’d swear had a tiny smile creeping in…

I’ve never been an astronaut.

But I did spend three years in and out of Bosnia with a special operations task force.

Honestly? I have a 1% rule. The things that might have a 20-30% chance of death are clearly stupid and no one wants to do them. Things with a one-in-a-million chance probably aren't gonna catch ya. But I figure that if something does, it's gonna be an activity that I do often but has a 1% chance of going horribly wrong and that I'm ignoring.



> sounds like the sayings of someone who has never faced a 50% chance of death

Well, this sounds like simple ad-hominem. I appreciate your insight, overall, though.

Many ideologically-driven people, like war field medics, explorers, adventurers, revolutionaries, and political martyrs take on very high risk endeavors.

I would also like to explore unknown parts of the Moon despite the risks, even if they were 50%. And I would wholeheartedly try to do it and put myself in the race, if not for a disqualifying condition.

There is also the matter of controllable and uncontrollable risks of death. The philosophy around dealing with them can be quite different. From my experience with battlefield medicine (albeit limited to a few years), I accepted the risks because the cause was worth it, the culture I was surrounded by was to accept these risks, and I could steer them by taking precautions and executing all we were taught. No one among the people I trained with thought they couldn't. And yes, many people ultimately dropped out for it, as did I.

Strapping oneself to a rocket is a very uncontrollable risk. The outcome, from an astronaut's perspective, is more random. I think that offers a certain kind of peace. We are all going to die at random times for random reasons, I think most people make peace with that, especially as they go into old age. That is a more comfortable type of risk for me.

Individuals have different views on mortality. Some are more afraid than others, some are afraid in one set of circumstances but not others. Some think that doing worthwhile things in their lives outweighs the risk of death every time. Your view is valid, but so is others'.



> Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.

Something like 10 million people will accept those odds. Let's say 1 million are healthy enough to actually go to space and operate the machinery. Then let's say 99% will back out during the process. That's still 10,000 people to choose from, more than enough for NASA's needs.



> No offense but this sounds like the sayings of someone who has never faced a 50% chance of death.

The space program pilots saw it. And no, I would not have flown on those rockets. After all, NASA would "man rate" a new rocket design with only one successful launch.



Using the space shuttle program as a comparison, because it's easy to get the numbers. There were 13 total deaths (7 from Challenger, 6 from Columbia [0]) during the program. Over 135 missions, the Space Shuttle took 817 people into space. (From [1], the sum of the "Crew" column. The Space Shuttle carried 355 distinct people, but some were on multiple missions.)

So the risk of death could be estimated as 2/135 (fatal flights / total flights) or as 13/817 (total fatalities / total crew). These are around 1.5%, much lower than a 50% chance of death.

This is not to underplay their bravery. This is to state that the level of bravery to face a 1.5% chance of death is extremely high.

[0] https://en.wikipedia.org/wiki/List_of_spaceflight-related_ac... [1] https://en.wikipedia.org/wiki/List_of_Space_Shuttle_missions
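
For what it's worth, both rates check out; a quick sketch using only the figures already cited above:

    # Reproduce the two risk estimates above from the cited figures.
    fatal_flights, total_flights = 2, 135
    fatalities, total_crew = 13, 817   # 7 on Challenger + 6 on Columbia

    per_flight = fatal_flights / total_flights  # chance a given flight is lost
    per_seat = fatalities / total_crew          # chance per crew seat flown
    print(f"per flight: {per_flight:.2%}, per seat: {per_seat:.2%}")
    # Prints roughly 1.48% and 1.59% -- both around 1.5%, far below 50%.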



If I recall correctly, the Saturn V was man rated after one launch. There were multiple failures on the moon missions that easily could have killed the astronauts.

The blastoff from the moon had never been tried before.



> Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.

It's possible you're just suicidal, but I'm reading this more as false internet bravado. A 50% risk of death on a mission to space is totally unacceptable. It's not like anyone will die if you don't go now; you can afford to take the time to eliminate all known risks of this magnitude.



Not bravado at all, if I was given those odds today, I would put all my effort into it and go.

There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...



> Not bravado at all, if I was given those odds today, I would put all my effort into it and go.

If that's actually true, you should see a therapist.

Given we have a track record of going to the moon with a much lower death rate than 50%, that's a provably higher risk than is necessary. That's not risking your life for a cause, because there's no cause that benefits from you taking this disproportionate risk. It's the heroism equivalent of playing Russian Roulette a little more than 3 times and achieves about as much.

> There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...

And for every one of those there's 100 keyboard cowboys on the internet who have never been within a mile of danger and have no idea how they'll react to it.

I would say I'm more ideologically driven than most, and there are a handful of causes I'd like to think I'd die for. But I'm also self-aware enough to know that it's impossible to know how I'll react until I'm actually in those situations.

And I'll reiterate: you aren't risking your life for a cause, because there's no cause that benefits from you taking a 50% mortality risk on a trip to the moon.



I think you may be projecting, because you are acting a bit like a keyboard warrior — telling others to see therapists. Consider that other people have different views, that is all. To some, the cause (principle/life goal) of exploring where others have not gone is enough.



Let me be clear; there are 2 options:

1. Go where others have not gone, with a 50% risk of death.

2. Wait 5 days for temperatures to rise, and go where others have not gone, with a 0.5% risk of death.

Choosing 1 isn't "different views, that is all", it's pretty objectively the wrong choice. It's not dying for a cause, it's not brave, it's not idealistic. It's pointlessly suicidal. So yes, I'm saying if you think 1 is the right choice you should see a therapist.

Notably, NASA requires all astronauts to undergo psychological evaluation, even if they aren't claiming they'll take insane unnecessary risks. So it's not like I'm the only one who thinks talking to someone before you potentially kill yourself is a good idea.



Can't we apply the same logic to the current Starliner situation? There's no way it should have launched, but someone browbeat others into saying it was an acceptable risk, with the known issues, to go ahead with the launch. Okay, so the launch was successful, but other issues that were known and suspected then caused problems after launch, to the point they are not positive it can return. So, should it have launched? Luckily, at least to this point, nobody has been hurt/killed, and the vehicle is somewhat still intact.



There are mitigations (of a sort) for the Starliner. It probably should not have launched, but now that it has, the flight crew is no longer in danger and can be brought down via Crew Dragon if necessary (as if Boeing needs any more embarrassment). If I was NASA, I'd take that option; though actual danger to the astronauts coming down in the Starliner seems minimal, having SpaceX do the job just seems safer.

As it is, NASA is keeping the Starliner in orbit to learn as much as possible about what's going on with the helium leaks, which are in the service module, which won't be coming back to earth for examination.



> at some point, one needs to say "yes" and take risks

Do they though? If the Challenger launch had been pushed back what major effects would there have been?

I do get your general point but in this specific example it seems the urgency to launch wasn’t particularly warranted.



You need to establish which complaints can delay a launch. The parent comment is arguing that you need to set some kind of threshold on that. In practice, airplanes fly a little bit broken all the time. We have excellent data and theory and failsafes which allow that to be the case, but it's written in blood.



That is a very uncharitable thing to say unless you have proof.

What was the public sentiment toward the Shuttle at the time? What was Congress's sentiment? Was there organizational fear in NASA that the program would be cancelled if launches were not timely?



Destin (from Smarter Every Day Youtube channel fame) has concerns about the next NASA mission to the moon (named Artemis): https://youtu.be/OoJsPvmFixU

Read the comments (especially from NASA engineers). It's pretty interesting that sometimes it takes courageous engineers to break the spell that poor managers can have on an organization.



Hard disagree. The idea that the machinery your life will depend on might be made with half-assed safety in mind is definitely not part of the deal.

Astronauts (and anyone intelligent who intentionally puts themselves in a life-threatening situation) have a more nuanced understanding of risk than can be represented by a single % risk of death number. "I'm going to space with the best technology humanity has to offer keeping me safe" is a very different risk proposition from "I'm going to space in a ship with known high-risk safety issues".



> the best technology humanity has to offer keeping me safe

Nobody can afford the best technology humanity has to offer. As one adds more 9's to the odds of success, the cost increases exponentially. There is no end to it.



True, but that's semantics at best--as the other post said, if something is better but humans can't afford it, then it's better than humanity has to offer. In the context of this conversation, there were mitigations which was very much within what could be afforded: wait for warmer temperatures, spend some money on testing instead of stock buybacks.



> Hard disagree. The idea that the machinery your life will depend on might be made with half-assed safety in mind is definitely not part of the deal.

It's definitely built in. The Apollo LM was .15mm thick aluminum, meaning almost any tiny object could've killed them.

The Space Shuttle flew with SRBs that were solid-fuel and unstoppable when lit.

Columbia had 2 ejection seats, which were eventually taken out and not installed on any other shuttle.

Huge risk is inherently the deal with space travel, at least from its inception until now.



Without links to more information on these engineering decisions, I don't think I'm qualified to evaluate whether these are serious risks, and I don't believe you are either. I tend to listen to engineers.



What makes you say it "could have gone right"? From what came out about the o-rings behavior at cold temperatures, it seems they were taking a pretty big risk. Your perspective seems to be that it's always a coin toss no matter what, and I don't think that is true. Were there engineers speaking up in this way at every successful launch too?



Actually, had it been windier that day it might have gone right.

There were 8 joints. Only one failed, and only in one place. The spot being supercooled by boiloff from the LOX tank. And the leak self-sealed (there's aluminum in the fuel--hot exhaust touching cold metal deposited some of it) when it happened--but the seal wasn't robust enough and eventually shook itself apart.



I think what they were saying, especially given the phrasing “How many projects luckily succeeded after a reckless decision?”, is that if things hadn’t failed we would never have known, and thus how many other failures of procedure/ethics have we just not seen because the worst case failed to occur.



I've always thought the same, that something like space travel is inherently incredibly dangerous. I mean surely someone during the Apollo program spoke out about something. Like landing on the moon with an untested engine being the only way back for instance.

Nixon even had an 'if they died' speech prepared, so someone had to put the odds of success at less than 100%.



I think the deal was there was already a pretty high threshold for risk. I don't know the percentage exactly, but the problem was the O-ring issue put it over the threshold, which should have triggered a no-go.

For example, you could say "we'll tolerate a 30% chance of loss of life on this launch" but then an engineer comes up and says "an issue we found puts the risk of loss of life at 65%". That crosses the limit and procedure means no launch. What should not happen is "well, we're going anyway" which is what happened with Challenger.
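
Stated as code, the procedure being described is just a threshold comparison; the 30% and 65% numbers below are the hypothetical ones from this comment, not real NASA figures:

    # Sketch of the go/no-go rule described above. The numbers are the
    # hypothetical ones from this comment, not actual NASA risk estimates.
    def launch_decision(estimated_risk: float, acceptable_risk: float) -> str:
        return "GO" if estimated_risk <= acceptable_risk else "NO-GO"

    ACCEPTABLE = 0.30                         # "we'll tolerate a 30% chance..."
    print(launch_decision(0.20, ACCEPTABLE))  # GO: within the agreed limit
    print(launch_decision(0.65, ACCEPTABLE))  # NO-GO: the O-ring finding
    # What should not happen: overriding a NO-GO with "we're going anyway".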



I doubt that in a bureaucracy as big and political as NASA saying "no" is ever easy or safe. In an alternate timeline (one where the Challenger launch succeeded) it would have been interesting to track McDonald's career after refusing to sign.



That's the thing I always wonder about these things.

It's fun and easy to provide visibility into whoever called out an issue early when it does go on to cause a big failure. It gives a nice smug feeling to whoever called it out internally, the reporters who report it, and the readers in the general public who read the resulting story.

The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times. How many of those things went on to cause a big, or even small, failure, and how many were nothingburgers in the end. Without that, it's hard to say whether leaders were appropriately downplaying "chicken little" warnings to satisfy a market or political need, and got caught by one actually being a big deal, or whether they really did recklessly ignore a called-out legitimate risk. It's easy to say you should take everything seriously and over-analyze everything, but at some point you have to make a move, or you lose. You don't get nearly as much second-guessing when you spend too much time analyzing phantom risks and end up losing to your competitors.



> The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times.

I'm not sure that's important at all. Every issue raised needs to be evaluated independently. If there is strong evidence that a critical part of a space shuttle is going to fail there should be zero discussion about how many times in the past other people thought other things might go wrong when in the end nothing did. What matters is the likelihood that this current thing will cause a disaster this time based on the current evidence, not on historical statistics

The point where you "have to make a move" should only come after you can be reasonably sure that you aren't needlessly sending people to their deaths.



> But at some point, one needs to launch the thing

Do they? Even if risks are not mitigated and the risk of catastrophe can't be pushed below, say, 15%? This ain't some app startup world where failure will lose a bit of money and time, and everybody moves on.

I get the political forces behind, nobody at NASA was/is probably happy with those, and most politicians are basically clueless clowns (or worse) chasing popularity polls and often wielding massive decisive powers over matters they barely understand at surface level.

But you can't cheat reality and facts, any more than you can in a casino.



Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.

So yes, I agree that at some point you need to launch the thing.



For the ones doing the colonizing? Overwhelmingly yes. A good portion of the issues with colonizing is about how the colonizing nations end up extracting massive amounts of resources for their own benefit.



In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.

I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.



You are not reading the context correctly. The original point was that establishing colonies was very risky, to which whyever implied that colonialism was not a success story. But in fact it was extremely successful from a risk analysis point of view. Some nations chose to risk lives and it paid off quite well for them. The nuance of how the natives were treated is frankly irrelevant to this analysis, because we're asking "did the risk pay off", not "did they do anything wrong".



I am not participating in amoral risk/reward analysis, and you should not be either.

If the cost was genocide or predictable and avoidable astronaut deaths, the risk didn't pay off; there's no risk analysis. This isn't "nuance" and there is no ambiguity here, it's literally killing people for personal gain.



> In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.

Can you provide a quote of where I said this is "an example to be followed"? (This is a rhetorical question: I know you can't, because I said nothing remotely akin to that.)

> I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.

Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.

If you see "colonization benefited the people doing the colonizing" and interpret it as "colonization is an example to be followed", that's entirely something wrong with your reading comprehension.

You're not "cutting me some slack" by putting words in my mouth and then saying "but maaybe you didn't mean that", and it's incredibly dishonest and shitty of you to pretend you are.



> Can you provide a quote of where I said this is an example to be followed"?

People can read the context of what you said, there's no need to quote it.

In fact, I would advise you to read the context of what you said; if you don't understand why I interpreted your comment the way I did, maybe you should read the posts chain you responded to and that will help you understand.

> Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.

Well, what you said, you said in a context. If you weren't following the conversation, you didn't have to respond, and you can't blame other people for trying to understand your comments as part of the conversation instead of in isolation.

Even if you said what you said oblivious to context, then I have to say, if you meant exactly what you said, then my response is that a risk/reward analysis which only considers economic factors and ignores human factors is reprehensible.

There is not a situation which exists in reality where we should be talking about economic success when human lives are at stake, without considering those human lives. If you want to claim "I wasn't talking about human life", then my response is simply, you should have been talking about human life because the actions you're discussing killed people and that the most important factor in understanding those events. You don't get to say "They took a risk and it paid off!" when the "risk" was wiping out entire populations--that's not a footnote or a minor detail, that's the headline.

The story of the Challenger disaster isn't "they took a risk ignoring engineers and lost reputation with the NASA client"--it's "they risked astronaut's lives to win reputation with the NASA client and ended up killing people". The story of colonizing North America isn't "they took a risk on exploring unknown territories and found massive new sources of resources" it's "they sacrificed the lives of sailors and soldiers to explore unknown territories, and then wiped out the inhabitants and took their resources".



Isn't it fairly obvious from history that you and the Renaissance-era colonizers calculate morality differently? You speak of things that should not be, but nonetheless were. The success of colonialism to the colonizers is obvious. Natives of the New World were regarded as primitives, non-believers, less than human. We see the actions of the European powers as abhorrent now, but 500 years ago they simply did not see things the way we do, and they acted accordingly.



What exactly is your point in the context of this conversation?

I'm a modern person, I have modern morality? Guilty as charged, I guess.

We're supposed to cut them some slack because they were just behaving as people of their time? Nah, I don't think so: there are plenty of examples of people at that time who were highly critical of colonialism and the treatment of indigenous people. If they can follow their moral compass so could Columbus and Cortez. "Everyone else was doing it" is not an excuse adults get to use: people are responsible for their own actions. As for their beliefs: they were wrong.

There are other points you could be making but I really hope you aren't making any of the other ones I can think of.



I think ultimately the problem is of accountability

If the risks are high and there are a lot of warning signs, there needs to be strong punishment for pushing ahead anyways and ignoring the risk

It is much too often that people in powerful positions are very cavalier with the lives or livelihoods of many people they are supposed to be responsible for, and we let them get away with being reckless far too often



> Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.

> So yes, I agree that at some point you need to launch the thing.

This comment sounds an awful lot like you think the genocide of indigenous peoples is justified by the fact that the winners built empires, but I'd like to assume you intended to say something better. If you did intend to say something better, please clarify.



>Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.

True, but that is for cases where you take the risk yourself. If the Challenger crew had known the risk and said "fuck it, it's worth it," it would have been different from a bureaucrat chasing a promotion.



Especially when that bureaucrat probably suffered no consequences for making the wrong call. Essentially letting other people take all of the risk while accepting none. No demotion, no firing, and even if they did get fired they probably got some kind of comfy pension or whatever

It's a joke



Boisjoly was McDonald's peer at Thiokol. Ebeling (I think) was either his direct manager or his division director.

Boisjoly quit Thiokol after the booster incident. McDonald stayed, and was harassed terribly by management. He took Thiokol to court at least once (possibly twice) on wrongful discrimination / termination / whistleblower grounds, and won.



I just listened to the audiobook on Spotify, free for premium members, and I'm wondering if that's why I'm seeing so much about the Challenger disaster lately. Well worth a listen; it spends a great deal of time setting up these key individuals who tried so hard to avert this disaster.



This is an ever recurring theme in the human condition.

McDonald’s loyalty was not beholden to his bosses, or what society or the country wanted at that moment in time. He knew a certain truth, based on facts he was aware of, and stuck by them.

This is so refreshing in today's world, where almost everyone seems to be a slave to some kind of groupthink, at least in public.



We all celebrate a hero who stands for what they believe or know to be right. When they stand alone we admire their steadfastness while triumphant music plays in the background.

In real life we can't stand these people. They are always being difficult. They make mountains out of every molehill. They can never be reasonable even when everyone else on the team disagrees with them.

Please take a moment to reflect on how you treat inconvenient people in real life.



In corporate world, everything must be tame and beige. Conflict or differences of opinion are avoided to focus on the areas where everyone agrees. It’s exhausting sometimes to try and change methodologies. Introducing new technology can cause so much headache that many passive leaders just shun it in favor of keeping the peace.



Exactly why you don't want whatever dashboards/alerts/etc. you maintain on your systems to have a "normal" amount of reds/fails/spurious texts.

At some point you become immune.

It's a lot harder to notice that there are 4 red lights today instead of the usual 2-3 than to notice 1 when there are normally exactly 0.



Yes. The causative issue is the way in which projects are managed. Employees have no ownership of the project. If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project. There are some obstacles:

1. Employees not having a say in which issues to work on. This pretty much leads to the death of a project in the medium term due to near-total disregard of maintenance issues and alerts.

2. Big-team ownership of a project. When everyone is in charge, no one is. This is why I advocate for a team size of exactly two for each corporate project.

3. Employees being unreasonably pressured for time. Perhaps the right framing for employees to think about it is: "If it were their own business or product, how would they do it?" This framing, combined with the backlog, should automatically help avoid spending more time than is necessary on an issue.



Not making an ethical/moral judgement here, just a practical one: is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it, if all factors were truly considered?

If every decision an employee made on features/issues/quality/time was accompanied by how much their pay was affected, would the outcomes really be better?

The team could decide to fix all bugs before taking on a new feature, or that the two-month allotment for a feature should really be three months to do it "right" without having to work nights/weekends. But would the team really decide to do that if their paycheck were reduced by 10%, or delayed for the extra month until those new features were delivered?

If all factors were included in the employee decision process, including the real-world effect of revenue/profit on individual compensation from those decisions, it is not clear to me that employees would make any "better" decisions.

I would think that employees could be even more "short-sighted" than senior management, as senior management likely has more at stake in terms of company reputation/equity/career than an employee who can change jobs more easily, and an employee might choose not to "get those alerts to zero" if it meant more immediate cash in their pocket.

And how would disagreements between team members be worked out if some were willing to forgo compensation to "do it right" and others wanted to cut even more corners?

Truly having ownership also means bearing financial risk.



> is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it

Non-technical management's skill level is almost always overrated. They're almost never qualified for it. Ultimately it still is management's decision, and always will be. If however management believes that employees are incapable of serving users, then it's management's fault for assigning mismatched employees.

> how much their pay was affected

Bringing pay into this discussion is a nonsensical distraction. If an employer misses two consecutive paychecks by even 1%, that's enough reason to stop showing up for work, and potentially to sue for severance+damages, and also claim unemployment wages. There is no room for any variation here.

> Truly having ownership

It should be obvious that ownership here refers to the ownership of the technical direction, not literal ownership in the way I own a backpack that I bring to work. If true financial ownership existed, the employee would be receiving substantial equity with a real tradable market value, with the risk of losing some of this equity if they were to lose their job.

> how would disagreements between team members be worked out

As noted, there would be just two employees per project, and this ought to minimize disagreements. If disagreements still exist, this is where management can assist with direction. There should always remain room for conducting diverse experiments without having to worry about which outcomes get discarded and which get used.

---

In summary, if the suggested approach is not working, it's probably because there is significant unavoidable technical debt or the employees are mismatched to the task.



> Not making an ethical/moral judgement here, just a practical one - is there any reason to believe that giving employees ownership of the projects will be any better than having "management" own it if all factors were truly considered ?

It's not either-or, the ownership is shared. As responsibility goes, the buck ultimately stops with management, but when the people in the trenches can make more of their own decisions, they'll take more pride in their work and invest accordingly in quality. Of course some managers become entirely superfluous when a team self-manages to this extent, and will fight tooth and nail to defend their fiefdom. Can't blame them, it's perfectly rational to try to keep one's job.

As for tying quality to pay in such an immediate way, I guess it depends on who's measuring what and why. There's something about metrics becoming meaningless when made into a target; it's called Goodhart's Law. I have big doubts as to whether it could work effectively in any large corpo shop; they're just not built for bottom-up organization.



I've been an engineer, a manager, and a founder/CEO, and I enjoy analyzing organizational dysfunction.

The difference between an engineer and a manager's perspective usually comes down to their job description. An engineer is hired to get the engineering right; the reason the company pays them is for their ability to marry reality to organizational goals. The reason the company hires a manager is to set those organizational goals and ensure that everybody is marching toward them. This split is explicit for a reason: it ensures that when disagreements arise, they are explicitly negotiated. Most people are bad at making complex tradeoffs, and when they have to do so, their execution velocity suffers. Indeed, the job description for someone who is hired to make complex tradeoffs is called "executive", and they purposefully have to do no real work so that their decision-making functions only in terms of cost estimates that management bubbles up, not the personal pain that will result from those decisions.

Dysfunction arises from a few major sources:

1. There's a power imbalance between management and engineering. An engineer usually only has one project; if it fails, it often means their job, even if reality dictates that it should fail. That gives them a strong incentive to send good news up the chain even when the project is going to fail. Good management gets around this by never penalizing bad news or good-faith project failure, but good management is actually really counterintuitive, because your natural reaction is to meet negative news with negative emotions.

2. Information is lost with every explicit communication up the chain. The information an engineer provides to management is a summary of the actual state of reality; if they passed along everything, it'd require that management become an engineer. Likewise recursively along the management chain. It's not always possible to predict which information is critical to an executive's decision, and so sometimes this gets lost as the management chain plays telephone.

3. Executives and policy-makers, by definition, are the least reality-informed people in the system, but they have the final say on all the decisions. They naturally tend to overweight the things that they are informed on, like "Will we lose the contract?" or "Will we miss earnings this quarter?"

All that said, the fact that most companies have a corporate hierarchy and they largely outcompete employee-owned or founder-owned cooperatives in the marketplace tends to suggest that even with the pitfalls, this is a more efficient system. The velocity penalty from having to both make the complex decisions and execute on them outweighs all the information loss. I experienced this with my startup: the failure mode was that I'd emotionally second-guess my executive decisions, which meant that I executed slowly on them, which meant that I didn't get enough iterations or enough feedback from the market to find product/market fit. This is also why startups that do succeed tend to be ones where the idea is obvious (to the founder at least, but not necessarily to the general public). They don't need to spend much time on complex positioning decisions, and can spend that time executing, and then eventually grow the company within the niche they know well.



> All that said, the fact that most companies have a corporate hierarchy and they largely outcompete employee-owned or founder-owned cooperatives in the marketplace tends to suggest that even with the pitfalls, this is a more efficient system.

This conclusion seems nonsensical. The assumption that what's popular in the market is popular because it's effective has only a limited basis in reality. Hierarchical structures appear because power naturally consolidates, and most people have an extreme unwillingness to release power even when presented with evidence that doing so would improve their quality of life. It is true that employee-owned companies are less effective at extracting wealth from the economy, but in my experience working for both traditional and employee-owned companies, the reason is that employees care more deeply about the cause. They tend to be much more efficient at providing value to the customer, and they pay employees better. The only people who lose out are the executives themselves, which is why employee-owned companies only exist when run by leaders with a passion for creating value over collecting money. And that's just a rare breed.



You've touched on the reason why hierarchical corporations outcompete employee-owned-cooperatives:

> Hierarchical structures appear because power is naturally consolidating and most people have an extreme unwillingness to release power even when presented with evidence that it would improve their quality of life.

Yes, and that is a fact of human nature. Moreover, many people are happy to work in a power structure if it means that they get more money to have more power over their own life than they otherwise would. The employees are all consenting actors here too: they have the option of quitting and going to an employee-owned cooperative, but most do not, because they make a lot more money in the corporate giant. (If they did all go to the employee-owned cooperative, it would drive down wages even further, since there is a finite amount of dollars coming into their market but that would be split across more employees.)

Remember the yardstick here. Capitalism optimizes for quantity of dollars transacted. The only quality that counts is the baseline quality needed to make the transaction happen. It's probably true that people who care about the cause deliver better service - but most customers don't care enough about the service or the cause for this to translate into more dollars.

As an employee and customer, you're also free to set your own value system. And most people are happier in work that is mission- & values-aligned; my wife has certainly made that tradeoff, and at various times in my life, I have too. But there's a financial penalty for it, because lots of people want to work in places that are mission-aligned but there's only a limited amount of dollars flowing into that work, so competition for those positions drives down wages.



> most customers don't care enough about the service or the cause for this to translate into more dollars.

This is an important point as it reinforces the hierarchical structure. In an economy composed of these hierarchies, a customer is often themselves buying in service of another hierarchy and will not themselves be the end user. This reduces the demand for mission-focused work in the economy, instead reinforcing the predominance of profit-focused hierarchies.



There is a Chinese saying: you can conquer a kingdom on horseback, but you cannot rule it from horseback. What that means is, yes, entrepreneurial velocity and time to market predominate in startups. But if they don't implement governance and due process, they will eventually lose whatever market share they gained. Left uncontrolled, internal factions and self-serving behavior destroy all organisations from within.



This is a wonderful summary, very informative. Thank you. Is there a book or other source you'd recommend on the subject of organizational roles and/or dysfunction? Ideally one written with similar clarity.

One thing stood out to me:

You note that executives are the least reality-informed and are insulated from having their decisions affect personal pain. While somewhat obvious, it also seems counterintuitive in light of the usual pay structure of these hierarchies and the usual rationale for that structure. That is, they are nearly always the highest paid actors and usually have the most to gain from company success; the reasoning often being that the pay compensates for the stress of, criticality of, or experience required for their roles. Judgments aside and ignoring the role of power (which is not at all insignificant, as already mentioned by a sibling commenter), how would you account for this?



Most of these organizational theories I've developed myself from observing how actual corporate hierarchies function and trying to put myself (and sometimes actually doing it!) in each of the different roles and thinking about how I would act with those incentives. I did have a good grounding in Drucker and other business books early in my career, and two blog series that have influenced my thinking are a16z's "Ones and Twos" [1] and Ribbonfarm's "Gervais Principle" [2].

For executive pay, the most crucial factor is the desire to align interests between shareholders and top executive management. The whole point of having someone else manage your company is so that you don't have to think about it; this only works when the CEO, on their own initiative, will take actions that benefit you. The natural inclination of most people (and certainly most people with enough EQ to lead others) is to be loyal to the people you work with; these are the folks you see day in and day out, and your power base besides. So boards need to pay enough to make the CEO loyal to their stock package rather than the people they work with, so that when it comes time to make tough decisions like layoffs or reorgs or exec departures, they prioritize the shareholders over the people they work with.

This is also why exec packages are weighted so heavily toward stock. Most CEOs don't actually make a huge salary; median cash compensation for a CEO is about $250K [3], less than a line manager at a FANG. Median total comp is $2M (and it goes up rapidly for bigger companies), so CEOs make ~90%+ of their comp in stock, again to align incentives with shareholders.

And it's why exec searches are so difficult, and why not just anyone can fill the role (which again serves to keep compensation high). The board is looking for someone whose natural personality, values, and worldview exemplifies what the company needs right now, so that they just naturally do what the board (and shareholders) want. After all, the whole point is that the board does not want to manage the CEO; that is why you have a CEO.

There are some secondary considerations as well, like:

1.) It's good for executives to be financially independent, because you don't want fear of being unable to put food on the table to cloud their judgment. Same reason that founder cash-outs exist. If the right move for a CEO is to eliminate their position and put themselves out of a job, they should do it - but they usually control information flow to the board, so it's not always clear that a board will be able to fire them if that's the case. This is not as important for a line worker since if the right move is to eliminate their position and put themselves out of a job, there's an executive somewhere to lay them off.

2.) There's often a risk-compensation premium in an exec's demands, because you get thrown out of a job oftentimes because of things entirely beyond your control, and it can take a long time to find an equivalent exec position (very few execs get hired, after all), and if you're in a big company your reputation might be shot after a few quarters of poor business performance. Same reason why execs are often offered garden leave to find their next position after being removed from their exec role (among others like preventing theft of trade secrets and avoiding public spats between parties). So if you're smart and aren't already financially independent, you'll negotiate a package to make yourself financially independent once your stocks vest.

3.) Execs very often get their demands met, because of the earlier point about exec searches being very difficult and boards looking for the unicorn who naturally does what the organization needs. Once you find a suitable candidate, you don't want to fail to get them because you didn't offer enough, so boards tend to err on the side of paying too much rather than too little.

Another thing to note is that execs may seem overpaid relative to labor, but they are not overpaid relative to owners. A top-notch hired CEO like Andy Grove got about 1-1.5% of Intel as his compensation; meanwhile, Bob Noyce and Gordon Moore got double-digit percentages, for doing a lot less work. Sundar Pichai gets $226M/year, but relative to Alphabet's market cap, this is only 0.01%. Meanwhile, Larry Page and Sergey Brin each own about 10%. PG&E's CEO makes about $17M/year, but this is only 0.03% of the company's market cap.

There's a whole other essay to write about why owners might prefer to pay a CEO more to cut worker's wages vs. just pay the workers more, but it can basically be summed up as "there's one CEO and tens of thousands of workers, so any money you pay the CEO is dwarfed by any delta in compensation changes to the average worker. Get the CEO to cut wages and he will have saved many multiples his comp package."

[1] https://a16z.com/ones-and-twos/

[2] https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

[3] https://chiefexecutive.net/wp-content/uploads/2014/08/CEO_Co...



What I see is a movement where line employees have a say in who is retained at the director and VP level.

The CEO reports to the board, but his immediate and second-tier reports are also judged by the employees. The thought is that this will give them pause before they embark on their next my-way-or-the-highway decision. The most egregious directors, the ones who push out line employees in favor of their cronies, will be fired under this evaluation.



> If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project.

You say this, but as someone who's run a large platform organization, that hasn't been my experience. Sure, some employees (maybe you) care about things like bringing alerts back to zero, but a large number are indifferent and a small number are outright dismissive.

This is informed not just by individual personality but also by culture.

Not too long ago I pointed out a bug in the code of someone I was reviewing, and instead of fixing it they said, "Oh okay, I'll look out for bugs like that when I write code in the future," then proceeded to merge and deploy their unchanged code. And in that case I'm their manager, not a peer or someone from another team; they had all the incentive in the world to stop and fix the problem. It was purely a cultural thing: in their mind their code worked "good enough," so why not deploy it and just take the feedback as something that could be done better next time?



With regard to alerts: I have written software that day-trades stocks, making a lot of trades over a lot of stocks. Let me assure you that not a single alert goes ignored, and if someone said it's okay to ignore said alerts, or to have persistent alerts that require no action, they would be losing money, because in time they would inevitably ignore a critical error. I stand by my claim that it's what sets apart good employees from those who don't care if the business lives or dies. I think a role of management is to ensure that employees understand the potential consequences to the business of the code being wrong.



Yes, there was a recent story about (yet another) Citi "fat finger" trade. The headlines said things like "the trader ignored 700 error messages to put in the trade", but listening to a podcast about it, it's more that awful systems that are always half broken are what ultimately led to it.

The real punchline was this: the trader confused a field for entering share quantity with the one for notional value, but because some European markets were closed, the system had weird fallback logic that set the price of each share to $1, so the confirmation shown back to the trader was the correct number of dollars he expected.

So awful system design leads to useless and numerous alerts, false confirmations, and ultimately huge errors.
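To make the failure mode concrete, here is a minimal C sketch of the kind of fallback described above; the function names, the $1 fallback, and the order size are all invented for illustration and say nothing about Citi's actual system:

    #include <stdio.h>

    /* Hypothetical: when a market is closed and no live price exists,
       fall back to $1.00 per unit. This is the dangerous part. */
    static double price_or_fallback(int market_open, double last_price) {
        return market_open ? last_price : 1.00;
    }

    int main(void) {
        double intended_notional = 50000000.0;  /* trader means $50M of stock */
        double quantity = intended_notional;    /* ...typed into the units field */
        double price = price_or_fallback(0, 42.50);  /* market closed -> $1 */

        /* The confirmation computes quantity * price = $50M, exactly the
           dollar figure the trader expects, so the sanity check "passes". */
        printf("Confirm order: %.0f units, estimated value $%.0f\n",
               quantity, quantity * price);
        return 0;
    }

Once the market reopens at the real price, the same order is worth orders of magnitude more than intended, and no single alert along the way looks any different from the everyday noise.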



> If employees had ownership over which changes they think are best, a good employee would act on bringing the alerts back to zero before they take on new features or a new project

That requires that you have good employees, which can be as rare as good management.



The more pernicious form of this, in my experience, are ignored compiler/linter/test warnings. Many codebases have a tremendous number of these warnings, devs learn to ignore them, and this important signal of code quality is effectively lost.



It's almost always worth spending the time to either fix every warning or, after determining it's a false positive, suppress it with a #pragma.

Once things are relatively clean, it's easy to see if new code/changes trip a warning. Often unexpected warnings are a sign of subtle bugs, or at least of reliance on undefined behavior. Sorting those out when they come up is a heck of a lot easier than tracing a bug report back to the same warning.
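For instance, with GCC or Clang a vetted false positive can be silenced narrowly rather than build-wide, so the build stays at zero warnings and anything new stands out; a minimal sketch (compile with -Wall -Wextra):

    #include <stdio.h>

    /* This callback must match a fixed signature, so 'context' is
       deliberately unused; suppress only this one vetted warning. */
    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-parameter"
    static void on_event(int event_id, void *context) {
        printf("event %d\n", event_id);
    }
    #pragma GCC diagnostic pop

    int main(void) {
        on_event(7, NULL);  /* any new warning elsewhere now stands out */
        return 0;
    }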



"What we should remember about Al McDonald [is] he would often stress his laws of the seven R's," Maier says. "It was always, always do the right thing for the right reason at the right time with the right people. [And] you will have no regrets for the rest of your life."



Even following all that could have led to Challenger exploding (stochastic process with non-zero probability of a terminal failure), and leaving everyone with "What did we do wrong?" without any answer and full of regrets for the rest of their lives.



"Truth, Lies, and O-Rings" is a fascinating (if sometimes tedious) book that should be at the top of any reading list for those interested in the Challenger disaster.

For me, one of the more interesting sidebar discussions is the one around the decision to test the boosters horizontally despite that not being an operational configuration. This caused flexing of the joints that was not at all similar to the flight configuration, and hindered identification of the weaknesses of the original "field joint" design.



Interestingly, we're still testing SLS SRBs[1] horizontally.

https://www.youtube.com/watch?v=n-wqAbVqZyg

---

1. In case anyone doesn't know, they use the actual recovered Shuttle casings on SLS, but use an extra "middle" section to make it 5 sections in length instead of the Shuttle's 4 sections. In the future they'll move to "BOLE" boosters which won't use previously flown Shuttle parts.



I think the booster was redesigned after the accident; I guess/hope the opportunity was seized to make a design less sensitive to orientation.



> Are you saying that they are tested horizontally or that they are ONLY tested horizontally?

My understanding is that they are only hot fired horizontally.

Presumably there are many tests done at the component level, although it's questionable whether it makes sense to call those tests horizontal or vertical at that point.



It's a shame we don't have more engineers today who refuse to invent things, given how many technological inventions are being used to further the destruction of our planet through consumerism.

Sadly, human society has a blind spot when it comes to inventions with short-term benefits but long-term detriments.

I would love to see more programmers refusing to work on AI.



> I would love to see more programmers refusing to work on AI.

Refusing to work on something is not newsworthy. I refuse to work on (or use) AI, ads and defence projects, and I'm far from being the only one.

Though let he who is without sin cast the first stone: I'd be standing on a high horse, having myself worked in the gambling sector, of which I'm now ashamed. So I prefer to focus on the projects themselves rather than on the people and what they choose to do for a living.



> Refusing to work on something is not newsworthy.

One person, no. A hundred, who knows. Ten thousand programmers united together not to work on something? Now we're getting somewhere. A hundred thousand? Newsworthy.



I would bet there are a hundred thousand people refusing to work in war, AI, ads, gambling, crypto, etc. I certainly am. But all it means is that pay goes up and the quality of engineering goes down a little in those sectors, not much more.



The issue is quantifying this sentiment. How would you even identify programmers who are doing this? Yet another reason why software engineers really ought to organize their labor, as many other disciplines of engineering did decades ago. Collective action like this would be more easily mustered, advertised, and used to influence outcomes if labor were merely organized and informed of itself.



I also refuse to work on the war machine, blockchain, or gambling.

Unfortunately it looks like that might also be refusing to eat right now. We'll see how much longer my principles can hold out. Being gaslit into an unjustified termination has me in a cynical kind of mood anyway. Doing a little damage might be cathartic.



I've been gaslit; I ended up walking away from my company. It was extremely painful.

> Doing a little damage might be cathartic.

Please avoid the regret. Do something kind instead. Take the high road. Take care of yourself.



Regret right now would be letting the stress of unemployment rip my family apart. I've got maybe a handful of door-slamming "what the fuck did you do all day then?" rants that I can tolerate before I'm ready to sign on with Blockchain LLM O-Ring Validation as a Service LLC: We Always Return True!™ if it'll pay the bills and get my wife to stop freaking out.



It probably doesn't help right now, but you should know you are not the only one in your situation. Perhaps it might help to write down your actual principles. Then compare that list with the real reasons you refuse some employment opportunities.

I think you have already listed one big reason that isn't a high-minded principle. You want to make money. There may be others.

It's always wonderful when you can make a lot of money doing things you love to do. It stinks when you have to choose between what you are exceptionally good at doing and what your principles allow.

If only somebody could figure out how the talents of all the people in your situation could be used to restore housing affordability. Would you take a 70% pay cut and move to Nebraska if it allowed you to keep all your other principles?

As you say, kindness isn't hiring. I'd love to see an HN discussion of all the good causes that need founders. It would be wonderful to have some well known efforts where the underemployed could devote some energy while they licked their wounds. It might even be useful to have "Goodworks Volunteer" fill that gap in employment history on your resume.

How do we get a monthly "What good causes need volunteers?" post on HN?



> It probably doesn't help right now, but you should know you are not the only one in your situation.

You're right, it doesn't. It feels more like an attempt to minimize. The rest was you spitballing some unrelated idea.



And this is how all unjust systems sustain themselves. You WILL participate in the injustice, or be punished SEVERELY. Why do the people doing the punishing want to punish you? Because they WILL participate in punishing, or be punished SEVERELY.

People have wondered how so many people ever participated in any historical atrocity. This same mechanism is used for all of them.



Avoiding the use of AI is just going to get you lapped.

There’s no benefit to your ideological goals in kneecapping yourself.

There’s nothing morally wrong with using or building AI, or gambling.



There's a lot baked into that thought, but I wanted to extract this part:

> There’s nothing morally wrong with ... building... gambling.

Say you're building a gambling system and building that system well. What does that mean? More people use it? Those people access it more? Access it faster? Gamble more? Gamble faster?

It creates and feeds addiction.



I agree with you. It's also worth noting that this isn't unique to anything discussed here. EVERYONE has their line in the sand on a huge array of issues, and that line falls differently for a lot of people.

Environment, religion, war, medicine; everything has a personal line associated with it.



Lots of things create and feed addictions, including baking cookies.

Let’s not confuse the issue. Just because you find something distasteful doesn’t mean it’s bad or morally problematic.



1) I question how much choice an addict has.

2) If you were devising more efficient sugar delivery systems for those acquaintances as a means to take every last cent they had, knowing they'd be unable to resist, you'd be complicit in robbing and killing them.



Wake me up when AI is able to compete with a software engineer with almost two decades in the field.

Hint: most of my consulting rate is not about writing fizzbuzz. Some clients pay me without even having to write a single line of code.



I am curious why you avoid ads - personally I view them as a tremendous good for the world, helping people improve their lives by introducing them to products or even just ideas they didn't know existed.



I tend to view ads as the perfect opposite of what you mentioned: an enormous waste of money and resources on a global scale, providing no tangible benefit that isn't easily and cheaply replaced by vastly superior options.

If people valued ad viewing (e.g. for product decisions), we’d have popular websites dedicated to ad viewing. What we have instead is an industry dedicated to the idea of forcefully displaying ads to users in the least convenient places possible, and we still all go to reddit to decide what to buy.



> If people valued ad viewing (e.g. for product decisions), we’d have popular websites dedicated to ad viewing.

There was a site dedicated to ad viewing once (adcritic.com maybe?) and it was great! People just viewed, voted, and commented on ads. Even though it was about the entertainment/artistic value of advertising and not about making product decisions.

Although the situation is likely to change somewhat in the near future, advertising has been one of the few ways that many artists have been able to make a comfortable living. Lying to and manipulating people in order to take more of their money or influence their opinions isn't exactly honorable work, but it has resulted in a lot of art that would not have happened otherwise.

Sadly the website was plagued by legal complaints from extremely shortsighted companies, who should have been delighted to see their ads reach more people, and it was eventually forced to shut down after it got too expensive to run (streaming video in those days was rare, low quality, and costly), although I have to wonder how much of that came from poor choices (like paying for insanely expensive Super Bowl ads). The website was bought up and came back requiring a subscription, at which point I stopped paying any attention to it.



We do have such sites though, like Tom's Hardware or Consumer Reports or Wirecutter or what have you. Consumers pay money for these ads to reduce the conflict of interest, but companies still need to get their products chosen for these review pipelines.



Tom's Hardware and Consumer Reports aren't really about ads (or at least that's not what made them popular). They were about trying to determine the truth about products and seeing past the lies told about them by advertising.



Strictly speaking, isn't advertising any action that calls attention to a particular product over another? It doesn't have to be directly funded by a manufacturer or a distributor.

I'd consider word-of-mouth a type of advertising as well.



To me, advertising isn't just calling attention to something; it's doing so with the intent to sell something or to manipulate.

When it's totally organic, the person doing the promotion doesn't stand to gain anything. It's less about trying to get you to buy something and more about people sharing what they enjoy or what has worked for them, or what they think you'd enjoy or would work for you. It's the intent behind the promotion, and who is meant to benefit from it, that makes the difference between friendly/helpful promotion and adversarial/harmful promotion.

Word of mouth can also be a form of advertising that is directly funded by a manufacturer or a distributor, though. Social media influencers are one example, but companies will also pay people to pretend to casually/organically talk up their products/services to strangers at bars/nightclubs, conferences, events, etc., precisely to take advantage of the increased level of trust we put in word-of-mouth promotion because of the assumption that the intent is to help rather than to sell.



Back when I was a professor I would give a lecture on ethical design near the end of the intro course. In my experience, most people who think critically about ethics eventually arrive at their own personal ethics which are rarely uniform.

For example, many years ago I worked on military AI for my country. I eventually decided I couldn't square that with my ethics and left. But I consider advertising to be (often non-consensual) mind control designed to keep consumers in a state of perpetual desire and I'd sooner go back to building military AI than work for an advertising company, no matter how many brilliant engineers work there.



To me, ads are primarily a way to extract more value from ad-viewers by stochastically manipulating their behavior.

There is a lot of evidence in favor of this view. Consider:

- Ads are typically NOT consumed enthusiastically or even sought out (which would be the case if they were strongly mutually beneficial). There are such cases, but they are a very small minority.

- If product introduction were the primary purpose, then repeatedly bombarding people with well-known brands would not make sense. But that is exactly what is being done (and paid for!) the most. Coca Cola does not pay for you to learn that they produce soft drinks. They pay for ads to shift your spending/consumption habits.

- Ads are an inherently flawed and biased way to learn about products, because there is no incentive whatsoever to inform you of flaws, or even to represent price/quality tradeoffs honestly.



Products (and particularly ideas) can be explored in a pull pattern too. Pushing things—physical items, concepts of identity, or political ideology—in the fashion endemic to the ad industry is a pretty surefire way to end up with an extremely bland society, or one that segments increasingly depending on targeting profile.



>I am curious why you avoid ads - personally I view them as a tremendous good for the world, helping people improve their lives by introducing them to products or even just ideas they didn't know existed.

I would agree with you if ads were just that: here's our product, here's what it does, here's what it costs. Unfortunately, ads sell the sizzle, not the steak. That has been the advertising mantra for probably 100 years.

https://www.youtube.com/watch?v=UW6HmQ1QVMw



If all the programmers working on advertising and tracking and fingerprinting and dark pattern psychology were to move into the field of AI I think that would be a big win.

And that's not saying that AI is going to be great or even good or even overly positive, it's just streets ahead of the alternatives I mentioned.



Is it miles ahead? An engine that ingests a ridiculous amount of data to produce influence? Isn't that just advertising but more efficient and with even less accountability?



I'll reply here since your comment was first.

AI has the potential to go in many directions, at least some of which could be societally 'good'.

Advertising is, has always been, and likely always will be, societally 'bad'.

This differentiation, if nothing else.

(Yes, my opinion on advertising is militantly one-sided. I'm unlikely to be convinced otherwise, but I'm happy for, and will read, contrary commentary.)



I don't think it's advertising that's inherently evil. Like government, it's a good thing, even a needed thing. People need laws and courts, and buyers and sellers need to be able to connect.

It turns evil in the presence of corruption. Taking bribes in exchange for power. Government should never make rules for money, but for the good of the people. And advertising should never offer exposure for sale - exposure should only result from merit.

Build an advertising system with integrity - in which truthful and useful ads are not just a minimum requirement but an honest aspiration and the only way to the top of the heap. Build an advertising system focused, not on exploiting the viewer, but on serving them - connecting them with goods and services and ideas and people and experiences that are wanted and that promote their health and thriving.

I won't work on advertising as it's currently understood... I agree it's evil. But I'd work on that, and I think it would be a great good.



I used to think there were useful ads. But really, even a useful ad is an unsolicited derailing of your thoughtspace. You might need a hammer, but did you really have to think about it right then? I think back to how my parents and grandparents got their goods before the internet. If they needed something, they went to the store. If they were interested in new stuff coming out that might be useful, they'd have a product catalog from some store mailed to them. Is a product catalog an ad? Maybe, depending on how you argue the semantics, but it's much more like going to a restaurant, browsing the menu, and choosing what's best for yourself, versus being shown a picture of a Big Mac on a billboard every time you leave your home.



Only in the sense that computers are all those things on steroids. It's low-level tech that can be used for many different things. Given the incentives in our socioeconomic system, it will be used for the things you have listed, just like everything else.



AI is the anti printing press. Done well, it removes the ability to read something written by someone far away, because it erodes any ability to trust that the someone exists, or to find that person's ideas amongst the remixed non-ideas AI churns out.

Advertising is similar, of course, and the only thing that has kept the internet working as a communications medium in spite of advertising is that it was generally labeled, constrained, enclosed, spam-filtered, etc.

The AI of today is being applied to help advertising escape those shackles, and in doing so, harm the ability to communicate.



If only it were that easy.

A lot of engineers in the US who are both right out of school and on visas need to find and keep work within a couple of months of graduation, and can't be picky with their job without risking deportation.

We have a fair number of indentured programmers.



I will never forget the grumpy look on the face of an Imperial Tobacco representative at a job fair at my university years ago. No one was visiting their booth for anything except silly questions about a benefits package that included cigarettes.



Sadly it's not enough for 99% of engineers to refuse to work on an unethical technology, or even 99.99%

Personally I don't work on advertising/tracking, anything highly polluting, weapons technology, high-interest loans, scams and scam-adjacent tech, and so on.

But there are enough engineers without such concerns to keep the snooping firms, the missile firms, and the payday loan firms in business.



One issue we have is that economic pressures underlie everything, including ethics. Ethics are often malleable depending on what someone needs to survive; given different situations with resource constraints, people are ultimately more willing to bend ethics.

Now, there are often limits to this flexibility, and lines some simply will not cross, but survival and self-preservation tend to take precedence and push those limits. E.g., I can't imagine ever resorting to cannibalism, but Flight 571, with the passengers stranded in the Andes, makes a good case for me bending that line. I'd be a lot more willing to work for some scam or in high-interest loans, for example, before resorting to cannibalism to feed myself, and I think most people would.

If we assured basic survival at a reasonable level, you might find far fewer engineers willing to work in any of these spaces. It boils down to what alternatives they have and just how firm they are on some ethical line in the sand. We'd pretty much improve the world all around, I'd say. Our economic system doesn't want that, though; it wants to be able to apply this level of pressure on people, and so do the highly successful, who leverage their wealth as power. As such, I don't see how that will ever change; you'll always have someone doing terrible things, depending on who is the most desperate.



There are even engineers with such concerns working at these firms. They might figure that the missile is getting built whether they work there or not, so they might as well take the job offer.



The curse of technology is that it is neither good nor bad. Only in the way it is used does it become one or the other.

>I would love to see more programmers refusing to work on AI.

That is just ridiculous. Modern neural networks are obviously an extremely useful tool.



As others have said, a big part of the problem is the need to eat.

I have a family. I work for a company that does stuff for the government.

I'd _rather_ be building and working on my cycling training app all day every day, but that doesn't make me any money, and probably never will.

All the majority of us can hope for is to build something that helps people and society, and hope that does enough good to counteract the morally grey in this world.

Nothing is ever black and white.



The problem is that for every one that refuses, there's at least one that will. So standing on principles only works if the rest of the rungs of the ladder above you also have those same principles. If anywhere in the org above you does not, you will be overruled/replaced.



I wish a lot more programmers refused to work on surveillance and ad tech... but nearly every site has that stuff on it... which goes to show what the principles of the profession, or of people in general, really are...



I no longer work as a software developer because I feel that technology is ruining normal human interactions by substituting them in incomplete ways and making everyone depressed.

I think we'd be better off making things for each other and being present and local rather than trying to hyperstimulate ourselves into oblivion.

I'm just some dude though. It's not making it to the headlines.



> I'm just some dude though. It's not making it to the headlines.

Doesn't have to be on headlines. Even just hearing that gives me a bit more energy to fight actively against the post-useful developments of modern society. Every little bit helps.



> I would love to see more programmers refusing to work on AI.

This is not effective.

Having a regulated profession that is held to some standards, like accountants, would actually work

Without unions and without a professional body, individual action won't achieve anything.



So do you think that people should be required to become members of a "regulated profession" before writing a VBA spreadsheet macro, or contributing to an open-source project?



Are you required to become a chartered civil engineer to build a house for your dog?

But the software developer whose code handles the personal information of 10 million people should know that you don't store passwords in plain text. Developers and business leaders at Virgin Media did not know this: if you clicked 'forgot password', they would send you a letter with your password in the mail.
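For reference, the baseline being pointed at here is: store only a salted, deliberately slow hash and verify against it at login. A minimal sketch using libsodium (assuming it is installed; link with -lsodium), with the example password obviously made up:

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (sodium_init() < 0) return 1;

        const char *password = "correct horse battery staple";
        char stored[crypto_pwhash_STRBYTES];

        /* What goes in the database: an Argon2 hash string with its own
           salt and parameters embedded, never the password itself. */
        if (crypto_pwhash_str(stored, password, strlen(password),
                              crypto_pwhash_OPSLIMIT_INTERACTIVE,
                              crypto_pwhash_MEMLIMIT_INTERACTIVE) != 0)
            return 1;  /* ran out of memory */

        /* At login: re-derive and compare; returns 0 on a match. */
        if (crypto_pwhash_str_verify(stored, password, strlen(password)) == 0)
            printf("login ok\n");
        return 0;
    }

With a scheme like this there is nothing to print in a letter: the site can reset a password, but by construction it cannot mail it back to you.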
