I’ve long debated with myself – and occasionally others – the morality of certain jobs in modern America.

Many companies make money – at least in part – by:

  • exploiting people (e.g., union-suppression tactics to keep wages low; shitcoins)
  • manipulating people (e.g., hacking people’s emotions so they lust for products they don’t need)
  • preying on people’s desperation (e.g., payday loans)
  • preying on people’s ignorance/trust (e.g., Bernie Madoff)
  • bait-and-switching (e.g., jacking up prices once someone has subscribed)
  • selling products and services that harm buyers (e.g., ultra-processed foods)

Is selling a harmful product to someone who wants it morally justifiable?

This last category is morally quite interesting.

Many seem to feel that – because America’s “a free country” (or was until 2025!?!?) – if someone WANTS to buy a product, it’s morally justifiable to sell it to them.

Is it okay if the company knows the product will harm customers?

Is it okay if customers know the product will harm them?

What if the product will harm people other than the buyer?

Is it okay if both the company and customers know the product will harm – perhaps even kill – thousands of innocent third parties?

For example:

Each of these 2.5 million non-smokers who died from secondhand smoke, each of these 800,000+ opioid victims, and each of the nearly 50,000 annual gunshot victims had friends and families who loved them and were devastated by their illness/addiction/suffering and their death.

(In modern America, Congress and our courts have long granted virtually complete immunity to gun manufacturers, absolving them of legal culpability for the deaths and injuries their weapons cause: “The Protection of Lawful Commerce in Arms Act prevents the gun industry from being held accountable for harm caused and disincentivizes the industry from ensuring consumer safety”. Recently, Mexico sued US firearms manufacturers for $10 billion over damages from the drug-cartel gun violence ripping Mexico apart – violence made possible by American firearms manufacturers – but the Supreme Court just tossed the lawsuit.)

For decades, tobacco companies knew cigarettes caused cancer, yet they hid the evidence and lied about it. (I highly recommend watching “The Insider,” a wonderful movie about brave whistleblower Jeffrey Wigand, based on this 1996 Vanity Fair article.)

For decades, oil companies have known that burning fossil fuels causes global warming, yet they hid the evidence and paid shills to muddy the scientific record. See “Exxon disputed climate findings for years. Its scientists knew better” (Harvard Gazette) and “Exxon Knew about Climate Change Almost 40 Years Ago” (Scientific American, 2015).

Exxon oil company has known since the late 1970s that its fossil fuel products could lead to global warming with “dramatic environmental effects before the year 2050.” Additional documents then emerged showing that the US oil and gas industry’s largest trade association had likewise known since at least the 1950s, as had the coal industry since at least the 1960s, and electric utilities, Total oil company, and GM and Ford motor companies since at least the 1970s.

“Assessing ExxonMobil’s global warming projections”, Science, 2023

My moral dilemma

Not long after the US Supreme Court struck down the federal ban on sports betting in 2018, clearing the way for states to legalize it, the healthcare startup where I was working suddenly laid off about 93% of its staff, surprising us all. (The CFO told me that only she and the CEO had known the firm’s true, dire financial situation, and that the CEO had misled us all and ordered her not to tell anyone.)

I needed a job quickly and loved programming in Elixir, but only a handful of firms were hiring for my favorite niche language at the time, and my fellow Elixir developers from the startup had just as suddenly become unemployed themselves.

I interviewed at a giant soda and junk food conglomerate and told every interviewer that I was uncomfortable selling junk food and soda, then asked for their thoughts, hoping to hear something that would assuage my conscience. I didn’t, and – shockingly – I didn’t receive an offer.

I also interviewed at an exciting startup that was building tech to enable live, in-game betting on the outcome of the next football drive, the next at-bat, the next soccer goal scorer, etc. I loved my prospective future colleagues. I loved the firm’s potential. The product was exciting. I loved their tech stack. I was thrilled by the technical challenges. And they made me an extremely generous offer to join.

Despite my excitement, and though I WANTED to accept, after searching my soul I just couldn’t.

I tried to rationalize taking the job…

  • Rationalization #1: I knew MANY customers love the thrill of making small bets on sports. I had even seen it in my own family: they enjoyed betting very small stakes, the risk was minimal, and the small wagers made watching the games they bet on more exciting.

  • Rationalization #2: Just being a sports fan can be expensive, especially if you buy clothes and merch and attend games. How is losing a little money to sports gambling different from paying to attend a live game?

  • Rationalization #3: Someone else would take the job if I didn’t, so it’s not like I could kill the startup, let alone the entire emerging sports betting industry. Does it even matter whether I do the job or someone else does? (A similar argument/rationalization applies at the company level: Is it okay to sell a harmful product if some other company will sell it if we stop?)

  • Rationalization #4: When I raised my concern with the founder/CEO of my prospective employer, he assured me they were well aware of the problem and committed to preventing customers from losing “too much.” If true, that’s admirable. But I could only guess how effective such efforts would prove, especially because how much money a gambler can afford to lose is so circumstance-specific.

Despite my desire to rationalize taking the job, I couldn’t stop obsessing over potential problem gamblers who might become addicted and lose large sums because my code made it easy and fun to gamble on live sports from their phones, anytime and anywhere. Even though most users would probably gamble responsibly and many would derive net benefit, many gambling addicts could have their lives RUINED… and might even ruin the lives of their spouses, children, and loved ones. I didn’t want to contribute to that.

Luckily, I received two other generous job offers that were less sexy but about which I didn’t feel morally conflicted. Not everyone has the opportunity/privilege of weighing moral concerns when choosing a workplace.

We now know the horrible toll of sports gambling addiction

When I made my decision, sports gambling had been legal outside of a few select locations, like Las Vegas casinos, for only a short time, so my moral concerns were theoretical, not tangible.

That’s no longer true.

I decided to blog about this because I recently watched two powerful segments on the harms of sports gambling. The first is this “Daily Show” interview with Jonathan D. Cohen, author of “Losing Big,” a new book on the harms of sports gambling:

The second is this episode of John Oliver’s “Last Week Tonight”:

I’m relieved I listened to my conscience and turned down an otherwise super-exciting job opportunity.

Postscript: Is it okay to sell a product with a significant risk of death? What if it could wipe out all of humanity?

Tonight I watched the documentary Titan: The OceanGate Disaster and was horrified by the completely unacceptable risks founder and CEO Stockton Rush took with his own life, his employees’ lives, and his customers’ lives. He was, in effect, playing Russian roulette with their lives. He was also savage and cruel toward the caring, reasonable, worried employees who spoke up about entirely valid engineering concerns. You can learn more here: Two Years After Titan Imploded, Here’s the Latest on the OceanGate Investigation

My mind immediately jumped to existential fears over AI.

OpenAI (sic) CEO Sam Altman just wrote “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence” and that we’re now starting to witness “recursive self-improvement,” which may enable exponential improvement over the next few years, after which AIs could outperform even expert human beings at virtually any task.

Many leading AI experts fear this is a potential existential threat to humanity. Many signed their names to a statement that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Some – perhaps most notably Nobel Prize winner Geoffrey Hinton – subsequently quit their work and expressed regret over having moved the field forward to such a dangerous point. I encourage you to watch some of his interviews:

And here’s another fearful “Godfather of AI,” Yoshua Bengio:

Nevertheless, many AI researchers continue running full speed ahead, gambling ALL our lives on their intellectual obsession, just as Stockton Rush gambled with human lives on a far smaller scale in his relentless drive to realize his macabre obsession: commercializing tourism to the wreck of the Titanic. Rush killed five. Hopefully AI doesn’t kill billions.


With thanks to Amit Lahav for his photo shared on Unsplash