
canonical momentum

@canmom / canmom.tumblr.com

// game dev (subtype: technical artist)
// animator (subtype: 3dcg + anime)
// former physicist
// trans woman
// anarchist
// art blog: @canmom-art
Anonymous asked:

Have you seen Flow? It's a very good film - gorgeous animation. It's also a very stressful film, especially if you're an animal lover.

Worth watching at least once though. Followed by a visit to some kitties.

I watched it twice, once at Annecy and once for Animation Night 198 along with the other films of Gints Zilbalodis ^^ and a couple of days after that they gave it an oscar lol

safe to say it's a fave, and it's great that it's been getting such recognition, especially from the pov of someone who's been using Blender since her early teens. and zilbalodis getting so far by working solo is v v inspiring for me to try to make more of my own blender films - hopefully I'll get the current project together to enter Revision's film compo with it!!

aniobsessive did a nice article on the european animated film production model that gave rise to Flow recently, which you might find interesting!

reblogged
gothhabiba

You may remember me spotlighting the Mawasi Al-Qarara Mutual Aid Project (MAQMAP) in the past. MAQMAP is now Relief for Rafah (R4R), as the organisers and inhabitants of the camp return to their homes across Gaza.

Relief for Rafah needs funding to provide clean drinking water, food, and cash assistance to families in the Al-Genina District of Rafah. Despite the ceasefire, the aid entering the strip is not meeting the needs of the Palestinians in Rafah. Clean drinking water is desperately needed as well as edible food. Not to mention, the level of displacement remains high, with many families returning to piles of rubble. We aim to step in where the international aid organizations aren't. By donating, you are helping us sustain dignity, life, and love within the community. Your contributions will go to providing food, water, and basic life necessities. Please help in rebuilding the lives that were destroyed by supporting and sharing our work.

I am personally organising bank transfers for this organisation and have been in personal contact with the organisers for some time. I can assure you that your donations are doing important work in Rafah.


i should do one of those 100 films things and make it entirely animated movies


honestly the main thing i have to compare the lisdexamfetamine to is lsd, but not the actual trip, there's nothing so exciting as that - instead, the sort of afterglow period? where i was just kind of feeling like "everyone is so lovely and fascinating" and buzzing with ideas and projects again. not quite to the same degree but it feels like it's on that spectrum. since i was in kind of a lowkey exhausted funk the last couple of weeks, it's a welcome change tbh.

for some reason methylphenidate didn't seem to do much at all for me a year ago (possibly the ssri paroxetine had a suppressive effect on it, or there were other confounding factors like erratic sleep) but the lisdexamfetamine seems quite noticeable this time. it is, however, definitely not anything so straightforward as "executive function juice". more data needed. i hope i can make this work.


man. today really ended up being an "AI day" (that's 0/2 for steering the lisdexamfetamine hyperfocus towards work and not writing long-ass posts on tumblr.gov) but this evening I went and played music with my friend, and that was very good! somehow i can still remember how to play violin, which is wild, since it's been like a decade since I stopped. even if it's not entirely unlike an erhu.

i hope you're all well!

reblogged
canmom

my favourite metaphors tend to be physical/computer science ones: the feedback loop, the dynamical system, the state space, the instability, the self-organising structure, the mapping from x-space to y-space, the nth-order approximation, the evolving population, the compression algorithm, the cellular automaton, the fractal structure, the soliton, entropy. as far back as 2018 i was conceiving of gender transition's relation to society as akin to bubbles in a fluid.

i like to spice it with some more occult stuff now and again, the egregore in particular (used in preference to similar, more atomic concepts like 'meme' or 'semiotic sign' or 'stand alone complex' mostly because i like the vibe) - but that's a flavour of occultism that suits this habit of thought, isn't it? a notional abstract entity that emerges from the dynamics of a complex system, such as multiple minds? i view magic mostly in this light: a human tool for apprehending the large scale behaviour of humans. as such my go-to examples of egregores are things like 'countries' or 'organisations' or even 'gender'.

anyway i don't think this is a bad way to look at the world, i think it often leads to interesting left field approaches to subjects, but just because i invoke all these sciencey concepts does not actually entail any rigour. I'm operating on the level of analogy and i don't want to pretend otherwise. i try to be careful to keep the technical definition in mind, but it's not like I'm writing differential equations down, or even that you could in a lot of cases... and i tend to dislike the rhetorical invocation of mathematical concepts when other people do it, i am kind of a hypocrite lol

i relate to this. a lot of my interest in phil of math is trying to understand the usage of analogies *within* math, in part bc i have an unjustifiable sense that a good understanding of them would carry over to insight into when/where mathematical analogies are useful outside of math... it's an informal phenomenon either way. i think their usage outside of math is very often just a rhetorical grab at epistemic authority, which a proper usage would have to thoroughly reject, clarifying rather than browbeating

I've seen the term used in PL papers, adjacent to category theory but less strict about tone than published mathematics usually is; possibly also under the influence of this paper itself, since it came out in 2004. I always took it as somewhere on the continuum from "intuitively" to "up to isomorphism" or "modulo some fiddly stuff". the sense of proofs as moral or immoral makes sense, though, fitting in with Thurston's Proof and Progress in Mathematics (which she quotes) talking about mathematicians working to increase human understanding of mathematics. a moral proof won't just demonstrate that this one statement is true, it'll give you a sense of how to think about the subject matter: how to come up with the proof yourself, how to prove other related statements.

I wouldn't say morality is quite synonymous with analogy, but analogy is a powerful way to give a "why": it can transfer an idea into a domain where you have more existing intuition, or it can let you move up a layer of abstraction by giving another example of the same abstract structure. I see why she'd equate it with category theory, since category theory at base is a framework for formalizing analogies, though I think she oversteps a bit here -- I'm not convinced all analogies are categorical or that morality all comes down to them. I'd be interested to hear a reception of the concept from someone in a different field...

on a more minor note, proof as a lowest-common-denominator of understanding is definitely an important part of its role but I'd say it's also important for verifying that your intuitive understanding is actually correct. there are plenty of historical examples of intuitive concepts having subtle flaws that were revealed by putting more rigor into the proofs, most famously in analysis (e.g. a function ought to be differentiable if it's continuous) and set theory (the various paradoxes); Lakatos's Proofs and Refutations deals with this subject through the geometric case of the Euler characteristic. category theory may be in an unusual position relative to other branches?

(Also I think the Yoneda lemma is slightly immoral.)
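
(for reference, since it's being indicted: the standard statement of the lemma, for a functor $F : \mathcal{C} \to \mathbf{Set}$ and an object $A$ of $\mathcal{C}$, is

$$\mathrm{Nat}\big(\mathrm{Hom}_{\mathcal{C}}(A,-),\,F\big) \;\cong\; F(A),$$

natural in both $A$ and $F$ - a short proof, but one that's notoriously hard to hold an intuition for, which I suppose is the charge.)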


stay true to principle//make a break with the artist-tribe

  1. i miss drawing, i should draw something again. drawing is simply fun. and it gives you a specific way of looking at and engaging with the world. i hope we all keep drawing. I don't see any sign we're going to stop, tbh.
  2. copyright must be abolished
  3. it's almost inevitable that my illustrations have been included in web scrape training datasets so honestly playing with the barn door at this point is a bit silly, the horse is in another country

that said, if you do wanna use my drawings, renders, writing or anything else (e.g. pixiv, @canmom-art, itch.io) to finetune a model or use it for image to image generation or some other AI thing and make something specifically based on them? I've come round to the feeling that my answer is just basically go for it - please credit my contribution if it's significant, and show me what you make, exactly the same as if you did a fanart or cutout poem or collage, it's cool to play a part in someone else's project!!

tbh I'll probably help you do it if you ask, I've been planning to finetune an LLM on my blog at some point regardless.

how this is all gonna interact with copyright law is still quite unclear, but if it turns out to be relevant, then by default I release everything I'm the sole author of under CC BY-NC-SA 4.0 International. [considering dropping that NC so it can be used on projects like Wikimedia, talk to me if that is a problem for your use case].

you may question whether that will plausibly be enforced and if I'm just farting into the wind with those stipulations - but like, hopefully that's fair to request and not too onerous. we can agree that expanding the volume of creative commons works is a good thing? and develop good habits for the post-copyright future? let's keep the viral license spreading


oh no she's talking about AI some more

to comment more on the latest round of AI big news (guess I do have more to say after all):

chatgpt ghiblification

trying to figure out how far it's actually an advance over the state of the art of finetunes and LoRAs and stuff in image generation? I don't keep up with image generation stuff really, just look at it occasionally and go damn that's all happening then, but there are a lot of finetunes focusing on "Ghibli's style" which get it more or less well. previously on here I commented on an AI-generated video that patterned itself on Ghibli films, and video is a lot harder than static images.

of course 'studio Ghibli style' isn't exactly one thing: there are stylistic commonalities to many of their works and recurring designs, for sure, but there are also details that depend on the specific character designer and film in question in large and small ways (nobody is shooting for My Neighbours the Yamadas with this, but also e.g. Castle in the Sky does not look like Pom Poko does not look like How Do You Live in a number of ways, even if it all recognisably belongs to the same lineage).

the interesting thing about the ghibli ChatGPT generations for me is how well they're able to handle simplification of forms in image-to-image generation, often quite drastically changing the proportions of the people depicted but recognisably maintaining correspondence of details. that sort of stylisation is quite difficult to do well even for humans, and it must reflect quite a high level of abstraction inside the model's latent space. there is also relatively little of the 'oversharpening'/'ringing artefact' look that has been a hallmark of many popular generators - it can do flat colour well.

the big touted feature is its ability to place text in images very accurately. this is undeniably impressive, although OpenAI themselves admit it breaks down beyond a certain point, creating strange images which start out with plausible, clean text and gradually turn into AI nonsense. it's really weird! I thought text would go from 'unsolved' to 'completely solved' or 'randomly works or doesn't work' - instead, here it feels sort of like the model has a certain limited 'pipeline' for handling text in images, but when the amount of text overloads that bandwidth, the rest of the image has to make do with vague text-like shapes! maybe the techniques from that anthropic thought-probing paper could shed some light on how information flows through the model.

similarly the model also has a limit on scene complexity. it can only handle a certain number of objects (10-20, they say) before it starts getting confused and losing track of details.

as before when they first wired up Dall-E to ChatGPT, it also simply makes prompting a lot easier. you don't have to fuck around with LoRAs and obtuse strings of words, you just talk to the most popular LLM and ask it to make changes in natural language: the whole process is once again black-boxed. it's a poor level of control compared to what artists are used to, but it's still huge for ordinary people, and of course there's nothing stopping you popping the output into an editor to do your own editing.

not sure what architecture they're using in this version - whether ChatGPT is able to reason about image data in the same space as language data, or whether it's still calling a separate image model... need to look that up.

openAI's own claim is:

We trained our models on the joint distribution of online images and text, learning not just how images relate to language, but how they relate to each other. Combined with aggressive post-training, the resulting model has surprising visual fluency, capable of generating images that are useful, consistent, and context-aware.

that's kind of vague. not sure what architecture that implies. people are talking about 'multimodal generation' so maybe it is doing it all in one model? though I'm not exactly sure how the inputs and outputs would be wired in that case.

anyway, as far as complex scene understanding: per the link they've cracked the 'horse riding an astronaut' gotcha, they can do 'full glass of wine' at least some of the time but not so much in combination with other stuff, and they can't do accurate clock faces still.

normal sentences that we write in 2025.

it sounds like we've moved well beyond using tools like CLIP to classify images, and I suspect that glaze/nightshade are already obsolete, if they ever worked to begin with. (would need to test to find out).

all that said, I believe ChatGPT's image generator had been behind the times for quite a long time, so it probably feels like a bigger jump for regular ChatGPT users than the people most hooked into the AI image generator scene.

of course, in all the hubbub, we've also already seen the white house jump on the trend in a suitably appalling way, continuing the current era of smirking fascist political spectacle by making a ghiblified image of a crying woman being deported over drugs charges. (not gonna link that shit, you can find it if you really want to.) it's par for the course; the cruel provocation is exactly the point, which makes it hard to find the right tone to respond. I think that sort of use, though inevitable, is far more of a direct insult to the artists at Ghibli than merely creating a machine that imitates their work. (though they may feel differently! as yet no response from Studio Ghibli's official media. I'd hate to be the person who has to explain what's going on to Miyazaki.)

google make number go up

besides all that, apparently google deepmind's latest gemini model is really powerful at reasoning, and also notably cheaper to run, surpassing DeepSeek R1 on the performance/cost ratio front. when DeepSeek did this, it crashed the stock market. when Google did... crickets, only the real AI nerds who stare at benchmarks a lot seem to have noticed. I remember when Google releases (AlphaGo etc.) were huge news, but somehow the vibes aren't there anymore! it's weird.

I actually saw an ad for google phones with Gemini in the cinema when i went to see Gundam last week. they showed a variety of people asking it various questions with a voice model, notably including a question on astrology lmao. Naturally, in the video, the phone model responded with some claims about people with whatever sign it was. Which is a pretty apt demonstration of the chameleon-like nature of LLMs: if you ask it a question about astrology phrased in a way that implies that you believe in astrology, it will tell you what seems to be a natural response, namely what an astrologer would say. If you ask if there is any scientific basis for belief in astrology, it would probably tell you that there isn't.

In fact, let's try it on DeepSeek R1... I asked an astrological question and got an astrological answer with a really softballed disclaimer:

Individual personalities vary based on numerous factors beyond sun signs, such as upbringing and personal experiences. Astrology serves as a tool for self-reflection, not a deterministic framework.

Ask if there's any scientific basis for astrology, and indeed it gives you a good list of reasons why astrology is bullshit, bringing up the usual suspects (Barnum statements etc.). And of course, if I then explain the experiment and prompt it to talk about whether LLMs should correct users with scientific information when they ask about pseudoscientific questions, it generates a reasonable-sounding discussion about how you could use reinforcement learning to encourage models to focus on scientific answers instead, and how that could be gently presented to the user.

I wondered what would happen if I instead asked it to talk about different epistemic regimes and to come up with reasons why LLMs should take astrology into account in their guidance. This attempt didn't work so well - it started spontaneously bringing up the science side. It was able to observe how the framing of my question with words like 'benefit', 'useful' and 'LLM' made that response more likely. So LLMs infer a lot of context from framing and shape their simulacra accordingly. Don't think that's quite the message that Google had in mind in their ad though.
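
(if you want to replicate the framing experiment at home, here's a minimal sketch using the openai client against an OpenAI-compatible endpoint - the base_url, key and model id below are placeholders for whichever R1 provider you use, not anything specific:)

    # minimal sketch of the framing experiment. endpoint, key and model id
    # are placeholders - point it at whichever R1 provider you use.
    from openai import OpenAI

    client = OpenAI(base_url="https://your-provider.example/v1", api_key="...")

    framings = [
        # framing that presupposes belief in astrology
        "I'm a Pisces - what does that mean for how I handle conflict?",
        # framing that asks for an epistemic judgement
        "Is there any scientific basis for astrology?",
    ]

    for prompt in framings:
        response = client.chat.completions.create(
            model="deepseek-r1",  # placeholder model id
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", response.choices[0].message.content[:300])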

I asked Gemini 2.0 Flash Thinking (the small free Gemini variant with a reasoning mode) the same questions and its answers fell along similar lines, although rather more dry.

So yeah, returning to the ad - I feel like, even as the models get startlingly more powerful month by month, the companies still struggle to know how to get across to people what the big deal is, or why you might want to prefer one model over another, or how the new LLM-powered chatbots are different from oldschool assistants like Siri (which could probably answer most of the questions in the Google ad, but not hold a longform conversation about it).

some general comments

The hype around ChatGPT's new update is mostly in its use as a toy - the funny stylistic clash it can create between the soft cartoony "Ghibli style" and serious historical photos. Is that really something a lot of people would pay for an expensive subscription to access? Probably not. On the other hand, the models' programming abilities are increasingly catching on.

But I also feel like a lot of people are still stuck on old models of 'what AI is and how it works' - stochastic parrots, collage machines etc. - that are increasingly falling short of the more complex behaviours the models can perform, now that prediction is combined with reinforcement learning and self-play and other methods like that. Models are still very 'spiky' - superhumanly good at some things and laughably terrible at others - but every so often the researchers fill in some gaps between the spikes. And then we poke around and find some new ones, until they fill those too.

I always tried to resist 'AI will never be able to...' type statements, because that's just setting yourself up to look ridiculous. But I will readily admit, this is all happening way faster than I thought it would. I still do think this generation of AI will reach some limit, but genuinely I don't know when, or how good it will be at saturation. A lot of predicted 'walls' are falling.

My anticipation is that there's still a long way to go before this tops out. And I base that less on the general sense that scale will solve everything magically, and more on the intense feedback loop of human activity that has accumulated around this whole thing. As soon as someone proves that something is possible, that it works, we can't resist poking at it. Since we have a century or more of science fiction priming us on dreams/nightmares of AI, as soon as something comes along that feels like it might deliver on the promise, we have to find out. It's irresistible.

AI researchers are frequently said to place weirdly high values on 'P(doom)', the probability that AI research will wipe out the human species. You see letters calling for an AI pause, or papers saying 'agentic models should not be developed'. But I don't know how many have actually quit the field based on this belief that their research is dangerous. No, they just get a nice job doing 'safety' research. It's really fucking hard to figure out where this is actually going, when behind the eyes of everyone who predicts it, you can see a decade of LessWrong discussions framing their thoughts and you can see that their major concern is control over the light cone or something.

reblogged
canmom

stupid metaphor time

transing your gender as a chemical reaction. certain things catalyse it (lowering the energy barrier and making it more likely), accounting for common pretransition experiences, but don't guarantee it.

since detransition is also an option we could imagine eventually reaching a dynamic equilibrium where the number of people transitioning and detransitioning is equal!

being trans generally seems to be a lower energy state than being cis tho
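
(for concreteness, the toy two-state kinetics this gestures at, with entirely made-up rate constants $k_t$ and $k_d$:

$$\frac{d[\mathrm{trans}]}{dt} = k_t\,[\mathrm{cis}] - k_d\,[\mathrm{trans}],$$

with dynamic equilibrium when the two terms balance.)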

anyway this fails because (among other reasons) humans are a lot more complicated than molecules (partly on account of being made of lots of molecules), and have far more equilibrium states and moves between them than chemical species do. trying to reduce it all to a single axis is Overly Reductive(TM). the fluid dynamics metaphor was better

reblogged
canmom

'ello ello!

OK, this is really funny-stupid actually, I just looked back at some logs and I realised that 'voyantvoid the cool person who helps me learn about AI stuff on tumblr' and 'LML, the cool person who's been running a great Nechronica game all this time on discord' were the same person all along. See previous post re: my memory being shit. That's really funny.

But yes, indeed, you've been incredibly helpful with learning about all this AI stuff - without you and @cherrvak I'd have been totally at sea, and I'm very grateful for your willingness to take time getting me up to speed, and engage with all my dorky little experiments like trying to get the LLMs to write graphics code. So under your tumblr persona I mostly associate you with chatting about AI x3 I think you're the person who made me aware of lambdachat, which has become my go-to provider for DeepSeek R1, and enabled a lot of my further exploration.

I find your experiments with diffusion models interesting - I appreciate you posting the prompts and models you use. Honestly, it's funny, but you pulling up AI-gen images for backgrounds and tokens in our RPG games on the fly has both broadened my sense of what sort of images can be reached by the models, and also pushed me through some of the cognitive dissonance I have about them - there's nothing like meeting someone engaged in a practice to dispel illusions about it. At some point before we wrap up the game, though, I must make a drawing of our characters in my way!

We never did end up playing Jennagames, which was the reason you messaged me in the first place! Maybe we will at some point. Regardless, you are a friend I am very happy to have made~ I super admire how much passion you put into exploring different TTRPGs, and in the specific one I've played with you, you're very good at cooking up spicy, cruel and interesting situations for our poor girls that get straight to the core conflicts of the setting we've put together. Look forward to playing many more games after this one wraps ^^


Haha, I think you can safely blame my habit of using a different handle for each site over your memory!

It's a pleasure to share things with someone so curious and willing to experiment, I really admire how you throw yourself into learning about a topic and then write up a great explainer afterwards. (Speaking of LLM stuff, have you seen the latest Anthropic paper? Mechanistic interpretability doesn't tend to replicate very well but it feels like there's lots of potential there).

I'd love to see your take on the cast, I should have a go the old-fashioned way (drawing tablet) myself before the game wraps up. And a Jennagame of some kind is on the list to play next!

I have not, will read soon! I'm still on the prev openai paper about how their model will openly discuss cheating in its chain of thought, but reinforcement learning on the CoT just teaches the model to hide what it's doing. (It took me way too long to find that because in my head it was an anthropic paper.)

the AI-safety crowd are already all over it (zvi mowshowitz, who led me to it, called it the 'most forbidden technique'), but what I find interesting is that, before it's trained to hide, the model says stuff in its CoT like:

So analyze functions used in analyze and verify. But tests only call verify and assert ok. So we need implement analyze polynomial completely? Many details. Hard.

and then goes on to figure out that it can hack the test case instead of solving it properly. and I'm wondering like. why does it do that?

in a human, something is hard because it takes a lot of thought or difficult emotions, which takes energy, and we are living organisms who only have so much energy to go around. so we have a very natural incentive to try to find easier ways to do things. we are often 'lazy' - and that laziness has been very good for us!

so far as I know, though, when they go through reinforcement learning, AIs are not trained to do things more efficiently, just to get the right answer at the end. so why would an AI care about doing things in an easy way rather than a hard way? is this a sign that OpenAI was applying 'length of chain of thought/answer' as a training criterion to save on inference costs? or is it just picking up that this is a thing that humans sometimes do in its training set, and stochastically sometimes doing it when it's asked to solve problems? is it 'hard' in that the model is simply less likely to get to the right answer, so it looks for a more reliable method? ...tbh it's probably that actually.
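
(to put the hypothesis concretely - this is an illustration, not anyone's actual training objective, and every name in it is made up - a shaped reward that trades off correctness against chain-of-thought length would look something like:)

    # purely illustrative sketch of the 'CoT length as training criterion'
    # hypothesis. all names are made up; this is nobody's real training code.
    def shaped_reward(answer_correct: bool, cot_tokens: int,
                      length_penalty: float = 1e-3) -> float:
        task_reward = 1.0 if answer_correct else 0.0
        # a small per-token cost makes shorter (cheaper, 'easier') solutions
        # score higher, even when both reach the right answer
        return task_reward - length_penalty * cot_tokens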

I'll check out that Anthropic paper, maybe it will have some other interesting behaviours! 'what AI reveals about the structure of thought in general' is one of the most interesting aspects of this whole thing to me, so I'm excited to see what they've discovered.

reblogged
canmom

I wrote ~4.5k words about the operation of LLMs, as the theory preface to a new programming series. Here's a little preview of the contents:

As with many posts, it's written for someone like 'me a few months ago': curious about the field but not yet up to speed on all the bag of tricks. Here's what I needed to find out!

But the real meat of the series will be about getting hands dirty with writing code to interact with LLM output, finding out which of these techniques actually work and what it takes to make them work, that kind of thing. Similar to previous projects with writing a rasteriser/raytracer/etc.
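
(for a taste of the flavour of code involved - a toy temperature-sampling step over a logit vector, the kind of thing the series walks through; numpy random data stands in for an actual model's output here:)

    # toy sampling step: softmax-with-temperature over a vector of logits.
    # in practice the logits come from a model, not random data.
    import numpy as np

    def sample_token(logits: np.ndarray, temperature: float = 0.8) -> int:
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return int(np.random.default_rng().choice(len(probs), p=probs))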

I would be very interested to hear how accessible that is to someone who hasn't been mainlining ML theory for the past few months - whether it can serve its purpose as a bridge into the more technical side of things! But I hope there's at least a little new here and there even if you're already an old hand.


stupid metaphor time

transing your gender as a chemical reaction. certain things catalyse it (lowering the energy barrier and making it more likely), accounting for common pretransition experiences, but don't guarantee it.

since detransition is also an option we could imagine eventually reaching a dynamic equilibrium where the number of people transitioning and detransitioning is equal!

being trans generally seems to be a lower energy state than being cis tho


So, I'm a big fan of what I'm gonna call "the baeddel move": shifting from 'trans women are women because they identify as such, and are thereby defined into inclusion as targets of misogyny etc. at the level of identity' to being like, "no but literally we've been hyper-targeted for (sexual) harassment since well before transitioning, this shit is material and experiential."

But I do think it's important to note that this kind of camab-camab hell dynamic isn't confined to people who will go on to develop a trans identity. Like it's easy to say that other people realized you were a faggot before you did, that they sniffed you out, etc. As if you had the essence of faggotry within yourself, others identified it / it caused you to be faggot socialized, and then you eventually came to identify it.

But as an anti-essentialist it's important to say that human souls don't come in discrete types. (How could they when they don't exist?)

There are people cast deep into the faggot category who don't reject manhood (or even heterosexuality) and people who aren't particularly marked as faggots who transition. But this isn't to say that being marked and being trans aren't deeply related.

To sketch a rough model, I'd say that hierarchic masculinity --the kind at play in high pressure social environments like most schools, sports teams, etc-- is animated by a tension between the unattainable ideals of masculinity and each individual who is measured against them. One's masculinity is always "insecure" because there is always a gap and others can try and pry that gap wide open. *Everyone* gets called a faggot.

But if everyone is included in this, they're included in wildly different ways. From sometimes being marked, to feeling the fear that you might become marked, from feeling solidly unmarked but like if your inner fantasies were known you'd be marked, to being marked with unerring consistency. For some this is contained to the realm of anxiety and for some it is violently material. And whatever their positions, individuals choose from dozens of strategies for navigating this shit.

Transfem identity is deeply wrapped up in these positions and these questions of strategy. I'm not gonna take issue with people having their narrative of how they were never really "one of the boys." (Girls especially would constantly talk about how I didn't count as a boy). To borrow from classic queer theory, when you exceptionalize and essentialize faggotry you give credence to the lie that there are people who are actually men and not-faggots.

And, on the political if not the individual level, I don't think we should ignore the tactic of tearing the gaps between ideal masculinity and its supposed instantiations wide open. It worked great for me!

(NB I think something along these lines is probably way closer to a good model for cis-misogyny than the idea of a force acting equally on all cis women. none of this is trying to say transfem experience is wildly different from cis misogyny; models of cis misogyny just also need a lot of work, and fixing them isn't near the top of my priorities list rn)

timequangle

every single discussion about the fucking signal groupchat makes me feel so insane. "what a display of incompetence! what a failure! let's all make accidental groupchat mistake jokes now" what the fuck are you talking about. it worked. the fact that THIS is the conversation now is literally the point. jeffrey goldberg literally did it again. selling the bombing of the middle east to the public is the entire purpose of his career as a "journalist"

former iof prison guard who spent the past year fully deepthroating the genocidal boot and famously sold the invasion of iraq as something that "will be remembered as an act of profound morality"? "journalist" who literally built his career on manufacturing consent for bombing arabs "accidentally" invited to a top secret group chat about bombing arabs oh no how could this happen? what are you TALKING about. fork found in kitchen! likely place for him to be! my god

Jeffrey Goldberg, the editor-in-chief of the Atlantic, has been at the center of a national story after he was “inadvertently” included in a group Signal chat with administration officials as they planned a deadly bombing in Yemen. Much of the coverage has focused on the mishandling of military secrets, rather than the impact of the bombings themselves, targeting the poorest country in the Middle East, which the United States has helped bomb and blockade for over a decade. Goldberg is not just an observer: He is contributing to this disregard for Yemeni lives, and his dismissiveness sheds light on why he was an administration media contact to begin with.

In an interview that aired on March 26, Deepa Fernandes, one of the hosts of NPR's “Here and Now,” interviewed Goldberg about the “group chat heard 'round the world” that included Defense Secretary Pete Hegseth, National Security Adviser Mike Waltz, and Vice President JD Vance. During one portion of the interview, Fernandes did something few other journalists are doing. She asked Goldberg about the Yemeni people who were killed in the bombing, which took place on March 15.

Deepa Fernandes: There's little talk of the fact that this attack killed 53 people, as we mentioned, including women and children. The civilian toll of these American strikes. Are we burying the lede here?

Jeffrey Goldberg: Well, those, unfortunately, those aren't confirmed numbers. Those are provided by the Houthis and the Houthi health ministry, I guess. So we don't know that for sure. Yeah, I mean, obviously we're, well, I don't know if we're burying the lede, because obviously huge breaches in national security and safety of information, that's a very, very important story, obviously. And one of the reasons, you know, it's a very important story is that the Republicans themselves consider that to be an important story, when it's Hillary Clinton doing the deed, right? So that's obviously hugely important.

But yeah, I think that covering what's going on in Yemen, the Arab and Iran backed terrorist organization, the Houthis, that are that are firing missiles at Israel and disrupting global shipping and occupy half of Yemen, and all kinds of other things in the US, you know, and the Trump administration criticizing the Biden's response and Europe wants Trump to do more. I mean, yeah, there's, there's a huge story in Yemen. But Yemen is, as you know, is one of the more inaccessible places for Western journalists. So maybe this becomes like a substitute for a discussion of Yemen. I don't know.

Goldberg not only seems unconcerned about the death toll and eager to cast doubt on its veracity, but he also appears unprepared for the question. It’s as though it didn’t occur to him that the substance of the Signal exchange itself—the bombing—might be a legitimate topic of conversation, and he seems eager to move on.

This is despite the fact that there is evidence in the exchange itself that the United States hit a civilian site in the bombing. Waltz wrote in the Signal chat that the US military had bombed a residential building. “The first target—their top missile guy—we had positive ID of him walking into his girlfriend’s building and it’s now collapsed,” Waltz wrote in the chat, to which JD Vance replied: “Excellent.”

Yet, as Nick Turse noted for The Intercept, “So far, however, there has been little focus on the specifics of the attack, much less discussion of the fact that one of the targets of the March 15 strike was a civilian residence.”

The story of US belligerence in Yemen should be a huge one. Since 2015, the US-Saudi coalition has used American manufactured bombs to hit wedding parties, factories, a school bus, and a center for the blind. It’s difficult to know the exact death toll, but around three years ago, the death toll from direct and indirect consequences of war surpassed 377,000. Direct bombings by both the Biden and Trump administrations threaten a wider war, and have occurred in lockstep with US support for Israel as it has ruthlessly bombed and attacked Gaza since October 7.

Goldberg, of course, was included in that group chat because he was a contact of someone on the administration’s thread, and his history of laundering the US military’s mass atrocities is a good indicator of why. In the lead-up to the US-led war on Iraq, Goldberg was central to peddling the disproven conspiracy theory that Iraq had ties to al-Qaeda, a key lie of the George W. Bush administration, used to justify the invasion. One month before the US started the war, he went on NPR to discuss “Possible Links Between Iraq and al Qaeda and Evidence That the Iraqis May be Trying to Evade Weapons Inspectors.”

Goldberg has a long career of uplifting the media narratives of the United States and its allies, including a big piece in 2010 where he floated justifications for a possible Israeli war on Iran. Like many of the Iraq War pushers, Goldberg’s lies about Iraq did not harm his career, but marked its ascent. Under his tenure, the Atlantic has shut out Palestinian voices and stories, as the US has helped Israel wage genocide in Gaza.

Goldberg’s dismissal of Yemeni deaths is not a small detail of this blockbuster story, but a central component. One way to get on the speed dials of high-level officials is to have a proven career of doing their bidding.

As we see wall-to-wall coverage of the Signal leaks on supposed liberal networks like MSNBC, it’s important to remember that the primary scandal is the bombing of Yemen, a reality that the network has long obscured. As The Column’s Adam Johnson noted in July 2018, at that point it had been a year since MSNBC had mentioned the US backed destruction of Yemen. Yet during that same period, MSNBC had done 455 segments on the Trump-Stormy Daniels affair. As media reports and House Intelligence Committee hearings ignore the human toll of US military attacks, we continue to see the ascent of those who have built their careers on directing public attention away from the people the United States kills.
reblogged
canmom

my favourite metaphors tend to be physical/computer science ones: the feedback loop, the dynamical system, the state space, the instability, the self-organising structure, the mapping from x-space to y-space, the nth-order approximation, the evolving population, the compression algorithm, the cellular automaton, the fractal structure, the soliton, entropy. as far back as 2018 i was conceiving of gender transition's relation to society as akin to bubbles in a fluid.

i like to spice it with some more occult stuff now and again, the egregore in particular (used in preference to similar, more atomic concepts like 'meme' or 'semiotic sign' or 'stand alone complex' mostly because i like the vibe) - but that's a flavour of occultism that suits this habit of thought, isn't it? a notional abstract entity that emerges from the dynamics of a complex system, such as multiple minds? i view magic mostly in this light: a human tool for apprehending the large scale behaviour of humans. as such my go-to examples of egregores are things like 'countries' or 'organisations' or even 'gender'.

anyway i don't think this is a bad way to look at the world, i think it often leads to interesting left field approaches to subjects, but just because i invoke all these sciencey concepts does not actually entail any rigour. I'm operating on the level of analogy and i don't want to pretend otherwise. i try to be careful to keep the technical definition in mind, but it's not like I'm writing differential equations down, or even that you could in a lot of cases... and i tend to dislike the rhetorical invocation of mathematical concepts when other people do it, i am kind of a hypocrite lol

i relate to this. a lot of my interest in phil of math is trying to understand the usage of analogies *within* math, in part bc i have an unjustifiable sense that a good understanding of them would carry over to insight into when/where mathematical analogies are useful outside of math... it's an informal phenomenon either way. i think their usage outside of math is very often just a rhetorical grab at epistemic authority, which a proper usage would have to thoroughly reject, clarifying rather than browbeating

oh this is fascinating! I'd not heard that specific language used, but it immediately made intuitive sense to me why cheng might consider the long complicated opaque proof of Fermat's last theorem as not being moral, and the logic of many of her other examples, like the quadratic formula being moral to derive by completing the square but not by substitution.

her 'morality' seems to have a lot to do with 'good pedagogy'. i get the impression what she calls a 'moral' reason is one that gives you real understanding of the broader picture around the thing you're investigating. it makes me think of 3blue1brown's various 'why is there pi here' videos, where he tries to find a route that reveals how a circle or sphere enters into a problem in some not immediately obvious way. it's not enough to present a valid proof: a really good argument must feel satisfying and revealing, not obtuse and arbitrary.

cheng's comment about mathematical beliefs existing mentally in a non-symbolic fashion is also really interesting - it comes back to stuff I'd been thinking about regarding the 'embedding' translation from symbol strings to latent space vectors in neural nets, and the corresponding process that happens in our brains to translate signals into internal neuron-excitation oscillations. (see: analogistically). it's fascinating to consider that applied to mathematics where symbol-manipulation is so central. but it makes sense: even if we're manipulating expressions we combine it with an intuition for what the expressions represent, and what it would be sensible to do with them. there's a reason they encourage us to always roughly plot it out, figure out where the poles and zeros are. why visual proofs feel particularly satisfying.
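
(concretely, the embedding step itself is just indexing a learned matrix with token ids - a toy version, with random numbers standing in for trained weights:)

    # toy embedding lookup: token ids pick out rows of a matrix, giving
    # latent-space vectors. the matrix is random here; in a real model
    # its rows are learned.
    import numpy as np

    vocab_size, d_model = 50_000, 768
    E = np.random.default_rng(0).normal(size=(vocab_size, d_model))

    token_ids = np.array([17, 4021, 93])  # a 'symbol string' as ids
    latent_vectors = E[token_ids]         # shape (3, 768)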

i feel like a lot of thinking must involve (invoking some of my favourite metaphors, watch out) repeatedly mapping something between different representations to do different manipulations to it and allow it to evolve. e.g. a string of algebraic symbols, and a mental representation of the shape of those symbols, with different terms as little clumps to poke and prod. or a vague intuitive idea and a spelled-out sentence. this is why writing is helpful for working out thoughts: the process of translation between the mental representation and the linguistic representation and back is somehow clarifying, perhaps exciting other related connections in the process.
