hey hi buddy

@automatic-ally / automatic-ally.tumblr.com

Quantum chemistry Goldilocks. Millenarian procrastinator. Want to do good. What's good, dogg?

me: man, this sentence is really clunky. Why is writing so hard? also me: uh, probably because you keep saying shit like “conditional on” instead of “if”.

reblogged

researching 17th century piracy tonight. came across this:

One popular pastime amongst pirates was the mock trial.  Each man played a part be it jailer, lawyer, judge, juror, or hangman.  This sham court arrested, tried, convicted, and “carried out” the sentence to the amusement of all. (x)

how widespread could this have really been? how would it have gotten passed from ship to ship? can you imagine a pirate crew at a tavern, bragging to another pirate crew about how good they are at playing pretend? why was their go-to game “legal system”? were they performing incisive satire? is this some sort of pirates-only inside joke that’s been lost to the ages?

update: the mock-charge in the mock-trial was piracy

they used to pretend to try each other for piracy

as a stress relief

roachpatrol

ok but it’s got to have been a lot of fun to be the pirate defense lawyer, for the pirate accused of piracy, to attempt to argue to the pirate judge, in front of a jury of pirate peers, that your client could not possibly be a pirate

“Your honour, this man is no more a pirate than you or I! After all, who among us has not, at one time or another, boarded a ship and brutally murdered all aboard?”

Pirates were big on law! They had to be: when you’re a bunch of ruffians crammed like sardines in what’s effectively a giant powder keg, the need to punish transgressions becomes rather obvious. Plus, it’s rather embarrassing if your crew of go-it-alone scalawags hangs back and looks around awkwardly when faced with an armed merchant ship. Each pirate ship had its own code, but they generally included principles like democracy (each pirate had a vote, regardless of race), separation of powers (the traditional powers of a captain were split between the captain and quartermaster), and constitutional rights (shares of booty were prescribed by the Code, as was the remit of the officers’ powers). They could even include a form of welfare or worker’s comp: on some ships, losing a leg in battle entitled you to $800 of loot. More on this in the Pirate Law chapter of David Friedman’s “Legal Systems Very Different from Ours”, available free here: http://www.daviddfriedman.com/Legal%20Systems/LegalSystemsContents.htm

5bi5

I want emo versions of idioms

Like, instead of “you’re barking up the wrong tree” it’s “you’re panicking at the wrong disco”

scanalan

You can lead a horse to Evanescence but you can’t bring him to life

In my culture we don’t say “I’m sad.” we say “my spirit’s sleeping somewhere cold” and I think that’s beautiful.

reblogged

Here’s a post about the Hard Problem of Consciousness, since @argumate and @foolishmeowing have been talking about it lately:

I think it’s a mistake to view the Hard Problem as unique to materialism.  Idealism can’t answer it either, and generally doesn’t try to.  IMO, the problem is not really about matter, but about description or explanation.

(I also don’t think it’s unique to “formal” systems or approaches, except in a sense so broad that any philosophy that could ever be done is “formal,” because it involves strings of words and/or arguments.)

The Hard Problem is very similar to the problem of existence – “why is there something rather than nothing?”  Both of these are questions about what “animates” or “turns on” any given description – what makes a description (such as a formal system) more than “mere words on a page.”  This is a distinctive class of problem because any familiar kind of explanation would simply become part of the description, and thereby be subject to the exact same problem.

If you add some sort of “existence-maker” mechanism to your description of what exists, you’re still open to the objection that the entire description, existence-maker and all, could just as well be an inert logical structure, without the extra magic of existence.  This is a pretty familiar, standard point in the context of the existence question, but in discussions about consciousness, the analogous point tends to get buried under arguments about whether or not there is more of a problem for certain kinds of description – “material” or “formal” or “functional” ones, or whatever.

It seems to me that this is a problem for descriptions, period.  If you look at the various dualistic and idealistic systems that have been proposed, they tend to be, well … systems: descriptive accounts of what is supposed to exist (some or all of it mental/spiritual), along with some arguments about why we should assent to the description, but nothing inherent in them to light the flame and turn these descriptions necessarily into the realities they talk about.  These systems do claim that the flame is in fact lit, but they generally treat this as self-evident via Descartes’ cogito or similar.  At least one mind/spirit exists (by cogito), and here are some things it can conclude a priori about other existents – Leibniz’s various principles, McTaggart’s theory of determining correspondences, or whatever – and we’re off to the races.

These can be perfectly fine theories of what mind/spirit is, insofar as it exists, but they simply do not touch why/how it exists: you need the spark of a cogito to get things started, and the cogito doesn’t leave you any less in the dark about why there’s an existing mind (instead of there not being one).  It just convinces you that there is one.  And once you’ve decided to work within a frame where that is taken as given, you’ve given up on Hard Problems.  These theories only “explain” the ineluctable experiencey-ness of experience in the way that the observation “as a matter of fact, something exists” explains why there is something rather than nothing – which is to say, not at all.

It seems intuitively clear to me that these Hard Problems are unanswerable, because they ask for something that is incompatible with what we take to constitute an “answer” to a question.  They ask for an argument that some description is necessarily animated, that there’s no mystery about how it becomes more than words on a page because there is something impossible about the merely-words version of it.  But such an argument is either:

(1) An argument for purely logical necessity, i.e. necessity within the terms of the description, in which case the necessity property is just one more fact about the description and could be as “mere” as the rest, or

(2) An argument that the description gets necessarily lit up by the animating fire of something else that already has it, in which case we need some initial spark to start things up, one that is not explained within the terms of the description. Generally this spark is supposed to be “obvious” / a priori, but the fact that we have a priori knowledge of something doesn’t constitute an explanation of why we have that knowledge, so this doesn’t get the job done.

I feel like you’re selling (1) short? Presumably there, the fact that “patterns with properties A and B, when physically instantiated, experience consciousness” is a fact about patterns in general (and consciousness in general), not really a fact about a particular pattern.

(this is by way of analogy to eg “function f has multiple local minima” being a fact about f that’s readily derived (heh) from f and the definition of local minima, but is totally extraneous for describing f if you know its functional form. Do you think there are reasons that consciousness shouldn’t be a similar property?)
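The derived-fact analogy can be made literal with a toy function (my own example; this particular f isn’t from the thread):

```python
# f's functional form contains enough information to derive facts like
# "f has multiple local minima", even though those facts are never listed
# separately as part of f's description.
def f(x):
    return x**4 - 2 * x**2  # minima at x = -1 and x = 1, local max at x = 0

# Derive the property numerically from the form alone, on a grid:
xs = [i / 100 for i in range(-300, 301)]
ys = [f(x) for x in xs]
local_minima = [xs[i] for i in range(1, len(ys) - 1)
                if ys[i] < ys[i - 1] and ys[i] < ys[i + 1]]

print(local_minima)  # [-1.0, 1.0] -- facts about f, extraneous to describing f
```

The point being: “has two local minima” is a property you read off from the pattern plus a general definition, not an extra ingredient you have to add to the pattern.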

reblogged

It’d be ridiculous not to sympathise with women in academia using credentials as an antidote to mansplaining. Like, your input isn’t taken seriously because you’re a woman, well, here’s an objectively verifiable thing that says that actually your input should be taken more seriously than some rando’s. It’s very useful as a tactic.

However, I wish the message was less “shut up and listen, cause I have an advanced degree” cause I think that’s upholding a hierarchy of a different kind, and one that is also to some degree shitty.

And it’s really not the case that people who don’t have advanced degrees in my subject should just shut up and listen to me. Maybe there is also a difference between more body-of-knowledge heavy fields like the humanities and less body-of-knowledge heavy fields like physics. But to be honest, even physics is pretty body-of-knowledge heavy so I dunno. 

what do you mean by body-of-knowledge heavy? cause physics seems really body-of-knowledge heavy to me

shieldfoss

Physics is much harder if you don’t know any of the things people learned before you, but you can re-derive the entire field from nothing in a cave if you’re comic-book super smart.

English Literature Criticism is impossible if you don’t know any of the things people wrote before you. You can’t be smart enough to know what an author wrote 200 years ago.

Well, if you’re good enough at physics you could…

  • If you wish to copy
  • A library from naught
  • You must first
  • (shuffles deck)
  • (reads card)
  • …understand a timeless conception of physics that will let you create the four-dimensional “universe crystal” containing everything that ever existed, then follow the angular shear planes until you reach a frozen original of the library you wish to instantiate. A sufficiently skilled practitioner will not need to create a new universe crystal, but will instead realize they can just use the one they already exist in.

(Disclaimer, I might be arguing against something you’re not actually saying.)

Wait no, hold on, you can’t just, THINK physics up!

You’d need a LOT of specialized equipment to run experiments that will actually cause the differences between different theories to create measurable differences beyond the margin of error!

And you’d need to FIRST work out not only the theoretical background to UNDERSTAND THE QUESTION, but also to physically measure the different properties of the materials which you will need to compute the expected behavior for the different theories!

And ALSO the specific theoretical and mechanical capabilities to CREATE specific-patterned materials and procedures and experiments! (And measure them to ensure they actually are in the correct patterns, and…)

Like, e.g., the Double-Slit experiment (necessary for confirming Light Is A Wave, Electromagnetism Obeys Relativity, etc.) needs you to be able to construct materials with Very Fricking Small Slits in them. You can’t just… make that, with some rocks.

Most of the History Of Science was Necessary, Actually.

This is a pretty interesting question—what would a perfect reasoner need to reconstruct ~all of physics? I agree that “be in a cave with a box of scraps” probably doesn’t cover it (especially if you’re being distracted by gunfire, threats on your life, lovable fellow prisoners who will inevitably die to help you escape, etc.).

But like, imagine a graph with “percent of papers in field read” on the X axis and “percent of papers in field whose result a perfect reasoner could derive”* on the Y. I find it not-implausible that physics looks ~sigmoidal with a low threshold, and Eng Lit looks linear.

Like, information theory wise, physics facts are more compressible, and a Solomonoff inductor could therefore more quickly come to place high credence in [the program whose output is all the laws of physics]. Or whatever.
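As a toy illustration of that compressibility point (nothing remotely like a real Solomonoff inductor, and both “datasets” below are made up): data generated by a short rule compresses far better than equally long rule-free data.

```python
import random
import zlib

# "Lawful" data: generated by a tiny program, like physics results from laws.
lawful = "".join(str(n * n % 97) for n in range(2000))

# "Contingent" data: same length, but each symbol is an independent brute fact,
# like the contents of a literary corpus you could never re-derive.
random.seed(0)
contingent = "".join(random.choice("abcdefghij") for _ in range(len(lawful)))

# A general-purpose compressor recovers much of the lawful data's regularity:
print(len(zlib.compress(lawful.encode())))      # small: the rule shines through
print(len(zlib.compress(contingent.encode())))  # much larger: nearly incompressible
```

The short program generating `lawful` plays the role of “the laws of physics”: once an inductor finds it, the rest of the field comes for free, whereas `contingent` can only be learned item by item.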

reblogged

Also pretty much every time I say this I just get weird looks but one of the reasons I don’t find myself convinced by the usual arguments used to try to convince me to go vegan is how… nakedly utilitarian they are. They’re like, “animals feel pain and pleasure. So don’t kill them! Plants don’t! So you can eat those!”

But the thing that always gives me pause there is I think of those plants whose leaves close up when they are jostled or otherwise attacked. That may not be a nervous system similar to mine but it looks like nonconsent, and if the idea is that animals shouldn’t be killed and eaten because they don’t like it, then… what happens if (I think when) we discover that plants can be meaningfully said not to like it either?

Then if we’re utilitarians, the question goes from “don’t cause suffering” to “well, the plant’s suffering doesn’t count as much as you starving,” which… well, maybe, but that seems… weirdly off point to me somehow?

Also if “don’t cause suffering” is the moral maxim, doesn’t that create problems for other moral dilemmas, like being pro-choice? If you’ve reached the point at which this fetus can suffer, and suffering is nonconsent, and nonconsent is wrong, I guess you’re stuck gestating?

The counter argument seems to be that plants grow fruit specifically to be edible. Maybe it’s pleasant to have bits of you ripped off and devoured, I don’t know.

But as an argument against something called “speciesism” it’s never seemed great to me. Are we acknowledging how little we understand about how plants work and whether or not they have experiences?

I’d guess plants’ internal structure for conducting large-scale behaviors is a lot simpler than most animals’ (no strong analogue to the nervous system). The range of behaviors they need to select from is certainly less complex. So I don’t think it’s crazy to assume they’re much less likely to be moral patients, and/or have much smaller moral weight. (I think this is less plausible for animals, and if animals are moral patients then they clearly have a lot of extremely bad experiences in factory farms.)

If the question is meat vs not-meat, it seems hard to compare the effect-on-plants, since obviously animals need a ton of plant matter. I believe in terms of total plant matter input, a pound of meat requires nontrivially more than a pound of not-meat? Ideally you’d have heuristics on diagnosing + aggregating plants’ pain, but “meat causes animals pain, and it’s not at all obviously better for plants” seems like a reasonable first-order conclusion.

If you’re curious about a panpsychist pro-vegan POV, Brian Tomasik is the person I’d recommend. He talks about plants here.

Okay, I read Brian’s discussion of plant suffering here. Basically:

* plants seem likely to have low moral value per organism, but there are so many of them that they might be significant in aggregate

* if you think of their moral value in terms of a ratio to animal or human value, you run into a weird “two-envelope problem” situation where the relative importance isn’t well-defined. Bummer. [1]

* like I said, vegetarianism kills fewer plants

* wild plant suffering would be the dominant concern. As with wild animal suffering, this suggests reducing biosphere.

* plants seem to have a slower “clock speed” (reaction time) than most animals. Another reason to downweight their value.
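The two-envelope point above can be made concrete with toy numbers (mine, not Brian’s):

```python
# Suppose your credence is split 50/50 between the animal-to-plant moral
# ratio being 2:1 or 1:2 (hypothetical numbers for illustration only).
ratios = [2.0, 0.5]
credence = 0.5

# Frame A: fix a plant at 1 unit and take the expected value of the animal:
animal_in_plant_units = sum(credence * r for r in ratios)        # 0.5*2 + 0.5*0.5 = 1.25

# Frame B: fix an animal at 1 unit and take the expected value of the plant:
plant_in_animal_units = sum(credence * (1 / r) for r in ratios)  # 0.5*0.5 + 0.5*2 = 1.25

# Each frame reports the *other* organism as more valuable in expectation,
# so "expected relative importance" depends on which frame you fix --
# that's the two-envelope problem.
print(animal_in_plant_units, plant_in_animal_units)
```

Working in absolute units within one utility function, rather than in ratios, is the sidestep mentioned in the footnote below.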

And of course some fun Tomasikian hypothesizing:

“There could be extremely minor areas of practical significance. For instance, I wonder if seed sprouts cause more harm than eating bigger plants because you kill so many inchoate plants when eating sprouts. (Of course, there might also be lots of fledgling plants on fields that die naturally in the process of farming big plants, but in terms of plant deaths per kg of food, sprouts still seem considerably higher.) Because I have no preference between sprouts and bigger vegetables, I decided I may as well avoid the sprouts. That said, it seems plausible that we might weight the harm of injuring a plant roughly in proportion to its size, because the leaves of a big plant can be seen as a lot like “sprouts” of their own. Plants have a fractal-like structure.

Another idea to ponder further is whether there’s such a thing as humane slaughter for plants. For example, I would conjecture that pulling a plant up by its roots might be “less painful” than cutting its stem, because uprooting the plant cuts off nutrients and water. In contrast, a plant with its stem cut off may send damage signals to remaining tissues? Of course, even an uprooted plant would survive for some time on its existing nourishment, similar to how the head of a chicken still blinks and opens its mouth in terror even after being severed from the body.”

[1] This seems like a general problem with morally uncertain Pascal’s wagers that get framed in ratio terms; at the end of the article Brian suggests sidestepping these by incorporating potentially-important-organisms into a single utility function with some reasonable-seeming weighting.

Okay, but why does complexity of behavior matter? Like, doesn’t that lead to bad moral rules too, like “the Ashley X surgeries were totally fine because that human doesn’t have complex behaviors like nondisabled humans do”?

Basically, if the idea is that a creature has a standpoint, I don’t understand why we give some standpoints weight (the creatures we are likely to survive without eating, though vitamin deficiencies are a problem for people who do it wrong or who are further restricted in diets) but not others (the creatures we’re much less likely to survive without consuming bits of.)

It all seems to me to be a convenient sidestep for “Well, but would you choose to be an animal if you had a choice?”

I mean if you photosynthesize you’re not harming any creature whatsoever. But we can’t do that, sadly.

A reductio: you wouldn’t feel bad for eating a sugar cube. (Probably! If you would, that’s a different conversation.)

I think it’s pretty natural to believe that a being must have some inner life (“something it’s like to be a plant or ant or antelope”) in order to be morally relevant. Complex behavior is an indication of possible complex mental processes—featuring a robust model of the world that, if sufficiently complex, might include the ability to suffer and to enjoy. (I’m no cognitive scientist, but I think this correlation is broadly accepted?)

There’s also the further ability of being able to think (sapience vs sentience). This seems like what’s particularly important for issues of consent, which is part of what’s at play in the Ashley X case. There’s both the question of what would improve her quality of life from an expected physical pain perspective (which seems to be what her parents and doctors were considering), and the question of what she would prefer based on her senses of identity and dignity (which seems to be what critics are focused on).

So I think the Ashley thing is mostly a side point—yes, there are bullets to bite if you consider gradations of moral value based on complexity of inner life. People accept all sorts of terrible things when they don’t consider this stuff, though: not going out of your way to avoid eating animals raised in bad conditions is accepting a tradeoff between their preferences and yours. And the Ashley case is largely about higher-thinking-based preferences, which most people would agree aren’t a thing for plants.

So basically: moral weight ~ internal life + higher thought, both of which heavily correlate with complex behavior. That’s a couple of steps, but I do think most people would accept each of them. People discriminate between rocks and people, which means you ~have to have some monotonically increasing function of moral worth as “inner life” stuff increases. It’s totally consistent to assign animals and Ashley nontrivial weight and plants trivial weight, and I suspect most people do so.

(PS - I don’t understand what your “convenient sidestep” comment means. So maybe I’m missing some of your POV?)

“I think it’s pretty natural to believe that a being must have some inner life (“something it’s like to be a plant or ant or antelope”) in order to be morally relevant.”

This is the core of our disagreement. I think moral relevance takes many forms. I think beings with inner lives have one kind of moral relevance but other things have other sorts of moral relevance. I don’t like necrophilia, but as an atheist I don’t believe a corpse has an inner life. Why not? Because there is some other value that a corpse has, such that defiling it is not a good thing to do. (Burying people in mass graves is another—I don’t think you have to know who someone’s family is for it to be better to bury them/perform rites for them than to toss them in a pit with other dead.)

The idea that only living beings with inner lives have moral value is bizarre to me, and probably why I’m not a strict consequentialist.

Gotcha. Good to have found a crux.

I do find that axiom difficult to square with feelings like that. There are hacks like “when they were alive, my grandfathers would have felt better to know I was the kind of person who would sit by the sacred fire and say the words”, but they feel...well, hacky, grandfathers not being perfect predictors and all.

So yeah, those instincts do feel hard to source to a moral patient. (They do seem like things you could incorporate as consequences in your consequentialism, but maybe not in a very elegant way.)

Avatar
reblogged

Also pretty much every time I say this I just get weird looks but one of the reasons I don’t find myself convinced by the usual arguments used to try to convince me to go vegan is how… nakedly utilitarian they are. They’re like, “animals feel pain and pleasure. So don’t kill them! Plants don’t! So you can eat those!”

But the thing that always gives me pause there is I think of those plants whose leaves close up when they are jostled or otherwise attacked. That may not be a nervous system similar to mine but it looks like nonconsent, and if the idea is that animals shouldn’t be killed and eaten because they don’t like it, then… what happens if (I think when) we discover that plants can be meaningfully said not to like it either?

Then if we’re utilitarians, the question goes from “don’t cause suffering” to “well, the plant’s suffering doesn’t count as much as you starving,” which… well, maybe, but that seems… weirdly off point to me somehow?

Also if “don’t cause suffering” is the moral maxim, doesn’t that Create problems for other moral dilemmas, like being pro-choice? If you’ve reached the point at which this fetus can suffer, and suffering is nonconsent, and nonconsent is wrong, I guess you’re stuck gestating?

The counter argument seems to be that plants grow fruit specifically to be edible. Maybe it’s pleasant to have bits of you ripped off and devoured, I don’t know.

But as an argument against something called “speciesism” it’s never seemed great to me. Are we acknowledging how little we understand about how plants work and whether or not they have experiences?

I’d guess plants’ internal structure for conducting large-scale behaviors is a lot simpler than most animals’ (no strong analogue to the nervous system). The range of behaviors they need to select from is certainly less complex. So I don’t think it’s crazy to assume they’re much less likely to be moral patients, and/or have much smaller moral weight. (I think this is less plausible for animals, and if animals are moral patients then they clearly have a lot of extremely bad experiences in factory farms.)

If the question is meat vs not-meat, it seems hard to compare the effect-on-plants, since obviously animals need a ton of plant matter. I believe in terms of total plant matter input, a pound of meat requires nontrivially more than a pound of not-meat? Ideally you’d have heuristics on diagnosing + aggregating plants’ pain, but “meat causes animals pain, and it’s not at all obviously better for plants” seems like a reasonable first-order conclusion.

If you’re curious about a panpsychist pro-vegan POV, Brian Tomasik is the person I’d recommend. He talks about plants here.

Okay, I read Brian’s discussion of plant suffering here. Basically:

* plants seem likely to have low moral value per organism, but there are so many of them that they might be significant in aggregate

* if you think of their moral value in terms of a ratio to animal or human value, you run into a weird “two-envelope problem” situation where the relative importance isn’t well-defined. Bummer. [1]

* like I said, vegetarianism kills fewer plants

* wild plant suffering would be the dominant concern. As with wild animal suffering, this suggests reducing biosphere.

* plants seem to have a slower “clock speed” (reaction time) than most animals. Another reason to downweigh their value.

And of course some fun Tomasikian hypothesizing:

“There could be extremely minor areas of practical significance. For instance, I wonder if seed sprouts cause more harm than eating bigger plants because you kill so many inchoate plants when eating sprouts. (Of course, there might also be lots of fledgling plants on fields that die naturally in the process of farming big plants, but in terms of plant deaths per kg of food, sprouts still seem considerably higher.) Because I have no preference between sprouts and bigger vegetables, I decided I may as well avoid the sprouts. That said, it seems plausible that we might weight the harm of injuring a plant roughly in proportion to its size, because the leaves of a big plant can be seen as a lot like “sprouts” of their own. Plants have a fractal-like structure.

Another idea to ponder further is whether there’s such a thing as humane slaughter for plants. For example, I would conjecture that pulling a plant up by its roots might be “less painful” than cutting its stem, because uprooting the plant cuts off nutrients and water. In contrast, a plant with its stem cut off may send damage signals to remaining tissues? Of course, even an uprooted plant would survive for some time on its existing nourishment, similar to how the head of a chicken still blinks and opens its mouth in terror even after being severed from the body.”

[1] This seems like a general problem with morally uncertain Pascal’s wagers that get framed in ratio terms; at the end of the article Brian suggests sidestepping these by incorporating potentially-important-organisms into a single utility function with some reasonable-seeming weighting.

Okay, but why does complexity of behavior matter? Like, doesn’t that lead to bad moral rules too, like “the Ashley x surgeries were totally fine because that human doesn’t complex behaviors like nondisabled humans do?”

Basically, if the idea is that a creature has a standpoint, I don’t understand why we give some standpoints weight (the creatures we are likely to survive without eating, though vitamin deficiencies are a problem for people who do it wrong or who are further restricted in diets) but not others (the creatures we’re much less likely to survive without consuming bits of.)

It all seems to me to be a convenient sidestep for “Well, but would you choose to be an animal if you had a choice?”

I mean if you photosynthesize you’re not harming any creature whatsoever. But we can’t do that, sadly.

A reductio: you wouldn’t feel bad for eating a sugar cube. (Probably! If you would, that’s a different conversation.)

I think it’s pretty natural to believe that a being must have some inner life (“something it’s like to be a plant or ant or antelope”) in order to be morally relevant. Complex behavior is an indication of possible complex mental processes—featuring a robust model of the world that, if sufficiently complex, might include the ability to suffer and to enjoy. (I’m no cognitive scientist, but I think this correlation is broadly accepted?)

There’s also the further ability of being able to think (sapience vs sentience). This seems like what’s particularly important for issues of consent, which is part of what’s at play in the Ashley X case. There’s both the question of what would improve her quality of life from an expected physical pain perspective (which seems to be what her parents and doctors were considering), and the question of what she would prefer based on her senses of identity and dignity (which seems to be what critics are focused on).

So I think the Ashley thing is mostly a side point—yes, there are bullets to bite if you consider gradations of moral value based on complexity of inner life. People accept all sorts of terrible things when they don’t consider this stuff, though: not going out of your way to avoid eating animals raised in bad conditions is accepting a tradeoff between their preferences and yours. And the Ashley case is largely about higher-thinking-based preferences, which most people would agree aren’t a thing for plants.

So basically: moral weight ~ internal life + higher thought, both of which heavily correlate with complex behavior. That’s a couple of steps, but I do think most people would accept each of them. People discriminate between rocks and people, which means you ~have to have some monotonously increasing function of moral worth as “inner life” stuff increases. It’s totally consistent to assign animals and Ashley nontrivial weight and plants trivial weight, and I suspect most people do so.

(PS - I don’t understand what your “convenient sidestep” comment means. So maybe I’m missing some of your POV?)

Avatar
reblogged

Also pretty much every time I say this I just get weird looks but one of the reasons I don’t find myself convinced by the usual arguments used to try to convince me to go vegan is how… nakedly utilitarian they are. They’re like, “animals feel pain and pleasure. So don’t kill them! Plants don’t! So you can eat those!”

But the thing that always gives me pause there is I think of those plants whose leaves close up when they are jostled or otherwise attacked. That may not be a nervous system similar to mine but it looks like nonconsent, and if the idea is that animals shouldn’t be killed and eaten because they don’t like it, then… what happens if (I think when) we discover that plants can be meaningfully said not to like it either?

Then if we’re utilitarians, the question goes from “don’t cause suffering” to “well, the plant’s suffering doesn’t count as much as you starving,” which… well, maybe, but that seems… weirdly off point to me somehow?

Also if “don’t cause suffering” is the moral maxim, doesn’t that Create problems for other moral dilemmas, like being pro-choice? If you’ve reached the point at which this fetus can suffer, and suffering is nonconsent, and nonconsent is wrong, I guess you’re stuck gestating?

The counter argument seems to be that plants grow fruit specifically to be edible. Maybe it’s pleasant to have bits of you ripped off and devoured, I don’t know.

But as an argument against something called “speciesism” it’s never seemed great to me. Are we acknowledging how little we understand about how plants work and whether or not they have experiences?

I’d guess plants’ internal structure for conducting large-scale behaviors is a lot simpler than most animals’ (no strong analogue to the nervous system). The range of behaviors they need to select from is certainly less complex. So I don’t think it’s crazy to assume they’re much less likely to be moral patients, and/or have much smaller moral weight. (I think this is less plausible for animals, and if animals are moral patients then they clearly have a lot of extremely bad experiences in factory farms.)

If the question is meat vs not-meat, it seems hard to compare the effect-on-plants, since obviously animals need a ton of plant matter. I believe in terms of total plant matter input, a pound of meat requires nontrivially more than a pound of not-meat? Ideally you’d have heuristics on diagnosing + aggregating plants’ pain, but “meat causes animals pain, and it’s not at all obviously better for plants” seems like a reasonable first-order conclusion.

If you’re curious about a panpsychist pro-vegan POV, Brian Tomasik is the person I’d recommend. He talks about plants here.

Okay, I read Brian’s discussion of plant suffering here. Basically:

* plants seem likely to have low moral value per organism, but there are so many of them that they might be significant in aggregate

* if you think of their moral value in terms of a ratio to animal or human value, you run into a weird “two-envelope problem” situation where the relative importance isn’t well-defined. Bummer. [1]

* like I said, vegetarianism kills fewer plants

* wild plant suffering would be the dominant concern. As with wild animal suffering, this suggests reducing the biosphere.

* plants seem to have a slower “clock speed” (reaction time) than most animals. Another reason to downweight their value.

And of course some fun Tomasikian hypothesizing:

“There could be extremely minor areas of practical significance. For instance, I wonder if seed sprouts cause more harm than eating bigger plants because you kill so many inchoate plants when eating sprouts. (Of course, there might also be lots of fledgling plants on fields that die naturally in the process of farming big plants, but in terms of plant deaths per kg of food, sprouts still seem considerably higher.) Because I have no preference between sprouts and bigger vegetables, I decided I may as well avoid the sprouts. That said, it seems plausible that we might weight the harm of injuring a plant roughly in proportion to its size, because the leaves of a big plant can be seen as a lot like "sprouts" of their own. Plants have a fractal-like structure.

Another idea to ponder further is whether there's such a thing as humane slaughter for plants. For example, I would conjecture that pulling a plant up by its roots might be "less painful" than cutting its stem, because uprooting the plant cuts off nutrients and water. In contrast, a plant with its stem cut off may send damage signals to remaining tissues? Of course, even an uprooted plant would survive for some time on its existing nourishment, similar to how the head of a chicken still blinks and opens its mouth in terror even after being severed from the body.”

[1] This seems like a general problem with morally uncertain Pascal’s wagers that get framed in ratio terms; at the end of the article Brian suggests sidestepping these by incorporating potentially-important-organisms into a single utility function with some reasonable-seeming weighting.
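To make the two-envelope issue concrete, here’s a toy numerical sketch (the 50/50 credence and the 0.001 / 0.1 weights are made up for illustration, not taken from Brian’s article). The point is that the expected value of a ratio depends on which unit you fix before taking the expectation, since E[r] and 1/E[1/r] generally differ:

```python
# Toy illustration of the two-envelope problem for moral weights.
# Suppose we're 50/50 between a plant mattering 0.001 or 0.1 as much
# as an animal. (These numbers are invented for the example.)
credence = 0.5
low, high = 0.001, 0.1

# Numeraire 1: animals. Expected plant weight, in animal-units:
plant_in_animal_units = credence * low + credence * high  # 0.0505

# Numeraire 2: plants. The animal is worth 1/low or 1/high plant-units:
animal_in_plant_units = credence * (1 / low) + credence * (1 / high)  # 505.0

# Converting the second answer back gives a *different* plant weight:
plant_back_converted = 1 / animal_in_plant_units  # ~0.00198
```

Here the same credences make the plant’s expected weight 0.0505 in animal-units but only ~0.002 when you route through plant-units, a ~25x disagreement, which is the sense in which the “relative importance isn’t well-defined.”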

Avatar
reblogged

Also pretty much every time I say this I just get weird looks but one of the reasons I don’t find myself convinced by the usual arguments used to try to convince me to go vegan is how… nakedly utilitarian they are. They’re like, “animals feel pain and pleasure. So don’t kill them! Plants don’t! So you can eat those!”


Avatar

Man, for some reason I have a really hard time grokking why evidential decision theory is wrong. Like, take the XOR Blackmail problem.* You might have termites in your house, which costs $1 million to deal with. Omega finds out whether this is true, and sends you a letter that says “I’ve sent you this letter iff (you’ll respond by sending me $1k) XOR (you have termites)”. (And that is actually how they decided whether to send it.)

What do?

Okay, so I get that if you knew about this beforehand, you’d want to be the kind of person who doesn’t pay up; all paying up does is lose you $1k in situations where you don’t have termites. But conditional on getting the letter, it feels really natural to me to decide to shift the situation from “I’m out $1000k” to “I’m out $1k”? For some reason I’m ~intuitively happy taking [the choice you’d want to have precommitted to] in other cases, like Newcomb (which EDT gets right), but not here.

*page 24 of the FDT paper
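For what it’s worth, the ex-ante policy comparison is easy to sketch numerically. A minimal simulation, where the 1% termite probability is my own assumption (the problem statement doesn’t fix it) and everything else follows the setup above:

```python
import random

# Sketch of the XOR Blackmail payoffs. The 1% termite rate is an assumed
# parameter; the $1M repair cost and $1k ransom follow the setup above.
# Omega perfectly predicts your policy and sends the letter iff
# (you'd pay on receiving it) XOR (you have termites).
P_TERMITES = 0.01
REPAIR, RANSOM = 1_000_000, 1_000

def expected_cost(pays_on_letter, trials=100_000):
    rng = random.Random(0)  # same termite draws for both policies
    total = 0
    for _ in range(trials):
        termites = rng.random() < P_TERMITES
        letter = pays_on_letter != termites  # the XOR condition
        total += (REPAIR if termites else 0)
        total += (RANSOM if (letter and pays_on_letter) else 0)
    return total / trials

cost_pay = expected_cost(True)      # ~ 0.01 * 1e6 + 0.99 * 1e3 = $10,990
cost_refuse = expected_cost(False)  # ~ 0.01 * 1e6 = $10,000
```

Refusing wins ex ante by about (1 − p) × $1k, because a predicted payer only ever receives letters in no-termite worlds. But conditional on receiving the letter, a payer always observes the $1k outcome and a refuser always observes the $1M one, which is exactly the intuition pulling toward EDT.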

Avatar
reblogged
Avatar
drethelin

Occasional reminder that global warming is not an x-risk or even close to one

Avatar
shlevy

Didn’t you see The Day After Tomorrow?

Our species would probably survive but our current global technological civilization might not, and I’m pretty attached to it

Avatar
argumate

being unable to coordinate around an issue as simple as this certainly isn’t a good look for the species as a whole, imagine if we were facing problems more difficult to explain than “burning coal makes CO2 warms planet like blanket” and that hadn’t been common scientific knowledge for over 100 years already.

My position is kind of a “yes, but…” one, to the general effect of “global warming will not end the human race or modern civilization in itself, but depending on the way it’s handled, it could plausibly result in the sort of political destabilization that could do so (particularly by sparking nuclear or large-scale conventional war).”

With the coda that rationalist discussions could stand a bit more nuance on the subject of issues that are not existential risks in and of themselves, but could snowball into x-risks when considering other factors, or dramatically increase the likelihood of other x-risks coming to pass.

“Instrumental existential risks”, as distinct from “terminal existential risks”? :P

Avatar

Some say the world will end in whiskey,

Some say it’s nice.

Now, if by whiskey you mean fire

I hold that it will light a pyre;

But if I had to judge it twice

I’ve heard enough that whiskey’s sweet

To hold that for consumption iced

Or simply neat

It will suffice.

Avatar

If they can’t afford a fancy pants meal, let them eat shorts

- Martholomew Sampsonnette

Avatar
reblogged
Avatar
itsbenedict

thinkin’ bout a dystopian world where all people are conscripted into construction work from the day they’re born, and you’re only allowed to escape such servitude if you get ripped enough to prove yourself as strong enough to command your lessers

which is to say, “in this world, it’s build or be built”

Imagine a bullshit space station dystopia on prestige tv. Everyone has to stay calm to avoid using up unnecessary oxygen. If you act up, you get shoved out an airlock. In this world...it’s chill or be chilled.
