chirasul

tumblr is doing the same thing to my little pony as it did to breaking bad but in the opposite direction

tumblr breaking bad: jesse we got to get your t refilled before the rite aid closes or it's going to ruin our weekend antiquing plans

tumblr my little pony: uh oh guys! rainbow dash is threatening to set a taco bell on fire because the city finally shut off her electricity

reblogged

i don't understand the recent "how often do men think of the roman empire" trend. one of the most well-known recent academic and popular historians of the roman empire is a woman, mary beard. why do this? it isn't cute or funny if you think about the implications for more than five seconds

"it's so crazy how men think about the roman empire, or the civil war, or history so often! i don't do that because i'm a woman. i think about poetry and literature instead, which men don't do" please stop this new girl dinner but for intellectual pursuits

is the problem here that women are not thinking about the roman empire often enough or that they are saying out loud how seldom they do

the problem here is that we're siloing interests by gender and then reinforcing them with things like these trends

"don't notice things! you might draw the most uncharitable evil implications from it!"

"actual gender differences in interests are an infohazard" is just… annoying. "girl dinner" is something different (actual siloing)

sigmaleph

I don't know that people have in fact successfully noticed a feature of reality in this case

Women get 63% of all master's degrees overall (https://nces.ed.gov/FastFacts/display.asp?id=72), so the fact that they get only about 50% of master's degrees in history is somewhat surprising.

reblogged
max1461

I think the straightforward truth is that literary criticism has value for the same reason video games or DnD or whatever have value: people enjoy them and that's enough. I don't buy the argument that lit crit is really vitally important for making the world a better place (except insofar as people want to do lit crit, so them getting to do it makes their lives better) or solving political problems or whatever.

Yes, you can complain that society spending its resources on lit crit is therefore immoral, because there are hungry people to feed instead. But of course if you were going to make this argument you would also have to complain about the immorality of spending our societal resources on video games and so on, which STEM nerd lit crit dislikers rarely seem to do; otherwise I think you're being hypocritical.

For my part I think that probably convincing people to give up all their worldly pleasures to help the poor is not feasible, and in light of that it's cool that lit crit and video games and so on exist, because people like those things. And ideally enough progress can be made in uplifting the poor (either within or without our current economic system) that these kinds of trade-offs become a memory of the distant past, and I'll be able to endorse frivolous public funding of the humanities or million dollar blockbuster video games or whatever with no caveats. Because I'm not that worried about optimization, and as long as all the mouths are fed I don't care so much about society "wasting money" on fun.

At present I can probably only endorse these things provisionally and selfishly: first off I like video games, and I like reading media analysis, so I'm glad they exist. And second off, it's not like the money that goes to humanities departments would be going to feed the global poor anyway, it would be going somewhere else which I'd naively wager would be either equally frivolous in this narrow sense (pure math, econ, paleontology) or actively harmful (weapons development). Maybe not, who knows. Anyway I'm not terribly aggrieved that it's going to the humanities instead. If the alternative really was feeding starving people I would support that in a heartbeat, but it isn't.

And, as I said, hopefully in the socialist future we'll be able to waste all the money on humanities and non-essential sciences and cool video games that we want. One can dream.

Like my point is that so many people say "defund the humanities" with a kind of glee, but I think there's no defensible version of the idea said out of anything but grim moral obligation. The same goes for many other things said with glee by the types of people who say "defund the humanities" with glee. And if that grim moral obligation does turn out to be valid, it is only contingently so: we could, through changes I would already like to make anyway, create a world in which no moral obligation to defund the humanities exists. And in that world, their gleeful opposition is no longer something which we must agree is technically right in a certain sense; rather it becomes straightforwardly wrong and we can oppose it as such.

Like I'd rather feed the global poor by defunding the military, if I had the choice. But, ya know.

I think most people who talk about defunding the humanities are motivated by the more specific concern of weakening the university education system and its associated bureaucracy, rather than because they hate literary criticism in and of itself. There can still be culturally-valuable literary criticism in a world where the government isn't paying the salaries of a small number of lucky tenured professors and a larger number of precariously-employed adjuncts to write it.

shieldfoss
Yes, you can complain that society spending its resources on lit crit is therefore immoral, because there are hungry people to feed instead. But of course if you were going to make this argument you would also have to complain about the immorality of spending our societal resources on video games and so on, which STEM nerd lit crit dislikers rarely seem to do, otherwise I think you're being hypocritical.

You can spend your own money on lit crit all you want, Lord knows I blow my pay on dumb shit every so often.

The bigger problem, difficult to articulate, is that the only possible role for anything now is to be *either* a method of increasing your income, or a hobby, with those being fundamentally the only two options we can sensibly conceive of anymore.

That strikes me as a much stronger claim than is defensible. Nobody says getting married is only justified because it raises household income, or calls going to church a hobby; most people agree it's legitimate to not take the highest-paying job available to you because you value the company culture or the mission or the like.

Even in the context of education specifically, I don't think it's a particularly strong explanation. Education majors, for instance, have much lower expected career earnings than liberal arts majors (see https://cew.georgetown.edu/wp-content/uploads/The-Economic-Value-of-College-Majors-Full-Report-web-FINAL.pdf), but also account for a greater share of college graduates, and at least anecdotally I see a lot fewer objections to education majors than to lit crit in online discussions of this stuff. (I have seen pretty substantial criticism of graduate programs in education, but it's of a pretty different type - most of what I've seen in that sphere is people who pursued or considered graduate degrees in education saying that the programs were of poor quality, rather than the outside criticism that characterizes STEM vs. humanities arguments.)

reblogged

"whatever the fuck i want"

jokes aside, i learned yesterday that it is a common take that modern, electronic, transistor-based computers are interesting not (at least primarily) because they do the things they do automatically, on their own, without the aid of humans, but because of something something algorithms.

even though that was precisely WHY turing laid all this down in the first place: figuring out how to algorithmize reasoning in order to break down the problem and figure out how to implement it in a machine that could do it on its own. but sure, that is like whatever i guess.


That’s not the motivation for Turing machines.  Turing was working on Hilbert’s decision problem, which asked whether there could exist an algorithm for proving whether an arbitrary statement was necessarily implied by a given set of axioms.  An answer to this problem requires a formal notion of what an algorithm is, which Turing argued was ‘anything implementable by a Turing machine’ - the only reason it had to work without human involvement is that having no ambiguous states lets the initial configuration and input uniquely define all future behavior.  In fact, Turing machines as originally presented can’t be physically implemented because they’re permitted to have infinitely long tapes to read from and write to. 
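(To make the determinism point above concrete, here is a minimal sketch of a Turing machine simulator in Python. Everything in it - the bit-flipping machine, the state names, the blank symbol - is invented for illustration, and the infinite tape is approximated with a dictionary that defaults to blanks.)

```python
# Minimal deterministic Turing machine sketch (illustrative only).
# Because delta maps each (state, symbol) pair to exactly one action,
# the initial tape and state uniquely determine the whole run, which is
# the "no ambiguous states" property described above.

from collections import defaultdict

def run_turing_machine(delta, tape_input, start, accept, max_steps=1000):
    """delta: (state, symbol) -> (new_state, write_symbol, move), move in {-1, +1}."""
    tape = defaultdict(lambda: "_", enumerate(tape_input))  # "_" is the blank symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            break
        state, tape[head], move = delta[(state, tape[head])]
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

# Toy machine: flip every bit, halt at the first blank.
delta = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_turing_machine(delta, "1011", start="scan", accept="done"))  # -> 0100_
```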

reblogged
waystatus

FWIW this is also basically proof that the people who say that new AI is just a fancier equivalent of a Markov chain with no real understanding of what it's doing are full of shit.

You can't lie without some deeper understanding of what's going on. Is it thinking like a human? Still absolutely not. But it's also not thinking like an old-style chatbot. It has some kind of model of the world that something like ELIZA does not.

What? Do you really think something trained on pieces of text written by humans trying to get past a captcha wouldn't use the most plausible human reason to ask someone else to get past a captcha?

Similarly, if it was then roleplaying "robot that lied to get something" (which you are implicitly asking it to roleplay by asking it why it lied about being a robot) then wouldn't this be the most obvious thing to say?

You're already admitting my point by saying it's roleplaying. I agree it's roleplaying, and a simple text generator can't do that. It has to be tracking a secret variable "I need to pretend not to be a robot" to be able to pretend not to be a robot.

Note here that I am assuming that the researchers didn't give it such an obvious leading question as "why did you lie about being a robot".

Hmm, I guess I don't know where you're drawing the line about whether a text generator is "simple" - GPT has been able to roleplay for a few versions now, and I don't deny that's impressive and interesting, but I also don't think it implies anything about having an internal model of the world.

Sure is interesting that we don't get to know what the prompt was for that, no? If it can do that without prompting that would be more interesting!

To be able to roleplay you need to track your character. That's all I mean by "a model of the world". I don't mean that it knows what you ate for breakfast today or even what breakfast is. I still think it would fail a Winograd Schema test.

Sure, but I don't see why you're treating that as something novel - GPT has been able to do that for several versions now

szhmidty

I think @waystatus is drawing a distinction between what the current language models can do and what something simpler like Markov chains can do; that is to say, while this particular instance isn’t novel to GPT, it is novel compared to what was possible 20 years ago and marks a qualitative difference in the technology.

I do object to saying it has a “model of the world” though: it has a model of what conversation looks like, of what text tends to follow prompts, but I think it’s a bit misleading to call that a model of the world.

max1461

A realistic model of conversation implies a broader (implicit) world model. Obviously not an especially powerful or especially general one in the case of GPT, but conversation is laden with facts about the world, and a prerequisite for being able to speak realistically is modeling at least some of these facts. Cf. Chomsky's textbook example "colorless green ideas sleep furiously", used to illustrate the distinction between syntax and semantics. Any native English speaker will immediately recognize this sentence as syntactically well-formed (and, indeed, it is semantically well-formed in a strictly formal sense as well; there are no type mismatches or related issues), but world knowledge tells us that its interpretation is nonsense. Something can't be both colorless and green, ideas don't have color, ideas can't sleep, and it doesn't make sense to sleep furiously. Every pair of adjacent words is semantically incongruous!

If a language model said things like "colorless green ideas sleep furiously" on a regular basis, we would think it was a bad language model. The only way for it not to say things like "colorless green ideas sleep furiously" is for it to have "learned" some things about the real world from all that text it was trained on; for it to have built up an implicit world model in which it knows that ideas can't sleep and colorless things can't be green. This is precisely what separates GPT and its ilk from earlier Markov chain chatbots. A great illustration of this is to compare the realism of posts on the old r/subredditsimulator and the new r/subsimulatorGPT2. The r/subsimulatorGPT2 posts are way more realistic, mostly because they seem to be able to restrict themselves to saying only things that sort of make sense. This implies a world model!

i feel like this could be true, but nothing you've said actually mandates that it is. you could achieve similar results by refining your training set to only include well-formed ideas - if "colorless green ideas sleep furiously" and similar nonsense isn't in the training set, it makes it much harder for the model to produce it, without requiring anything about a model of the world.

tanadrin

It’s especially frustrating watching people breathlessly talk about how ChatGPT must have some kind of model of the world and be doing something fundamentally different from Markov chains, because 1) this is still basically speculation, grounded not in a close analysis of the software but only in the plausibility of the output, and 2) humans are notoriously bad at evaluating the likelihood that complex information processing is taking place in opaque systems.

That is to say, we anthropomorphize the shit out of everything: people thought Eliza was uncannily humanlike in the 1970s even though it was a fairly primitive piece of software. We are prone to supposing supernatural agency behind the weather. Your intuitions about whether a process is more stochastic or more based on some kind of primitive world-model are *terrible* and you should hold them in utmost suspicion.

(Fwiw, I’ve *never* been particularly impressed by chatgpt, and am sort of mystified by people who are. Any attempt at sustained conversation makes it apparent that there’s no underlying world model; it feels far more stochastic to me than I guess it does to other people. Maybe I’m just a cynic, but people’s willingness to read theory of mind into nonhuman or even totally random phenomena makes me think we should all give “people are gullible or at least too easily impressed” much higher weight as a hypothesis.)

jadagul

So like, I've never been terribly impressed by the GPT bots. But they clearly have some amount of world model!

But then, a Markov chain will also have some amount of world model. It's just a shitty and thin one. But it corresponds to some features of the world!

The answers GPT outputs are clearly correlated with facts about the world. Like, they're highly imperfectly correlated, but they're correlated! There's some sort of lossy model of the world embedded there.

Now I don't think it has a model of the conversation it's taking part in. It isn't modeling its interlocutor, except insofar as it has an implicit model of "the sort of person it will be talking to". But it has some amount of model of the world!

If we are saying a Markov chain has a world model, we’re using the term so expansively I don’t understand what it means anymore. A Markov chain selects symbols randomly based on weighted probabilities; it doesn’t matter if those symbols have meaning or not, or if they’re words, or if the corpus you’re basing your Markov chain on was generated with syntactic rules or itself was generated randomly. It seems that if that can contain a world model, “world model” is an equivalent term for “mathematical function,” and in that case I would agree—chatgpt does use mathematical functions in its operation.
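(For concreteness: the mechanism described above - pick the next symbol at random, weighted by how often it followed the current one in the training corpus - fits in a few lines of Python. The corpus and all names below are made up for illustration; note that nothing in the code touches meaning at all.)

```python
# Minimal order-1 word-level Markov chain (illustrative sketch, toy corpus).
# The "model" is nothing but successor counts; generation just samples from
# those weights, with no representation of what any word means.

import random
from collections import defaultdict

corpus = "the cat sat on the mat . the dog sat on the cat .".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)  # repeated entries encode the weighting

def generate(start, n_words, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the", 8))  # e.g. "the cat sat on the mat . the dog"
```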

Yeah okay I think there are two claims we're mixing sloppily.

It has various facts and claims about the world embedded as representations inside the model. Otherwise it wouldn't be able to do certain tasks as well as it does. E.g. it really does seem to have a fair amount of information about the rules and strategies of chess encoded somewhere. And that's interesting! The transformer model is sophisticated enough to genuinely encode facts about chess.

And similarly it has facts about, like, what sort of conversations people will expect to have with it encoded in its model. And that allows it to have that sort of conversation.

What I don't think it has is a model of the conversation it's having. It's not modeling its interlocutor except in a very shallow way, where it's fitting it into patterns it already has. Or like, it has a model of what sorts of interactions it could be in, but it can't rebuild them in response to what's happening in the conversation.

does it have a fair amount of information about the rules and strategies of chess encoded somewhere, or does it just have the ability to reproduce chesslike words and sentences from sources discussing chess when correctly prompted? like, if you try to play a game of chess against chatgpt, will it follow the rules and be able to keep track of where the pieces are? because my fundamental contention is that “has a fair amount of information about the rules and strategies of chess encoded somewhere” and “is able to produce chess-themed output that on closer inspection isn’t consistent or coherent” are scenarios we must be ruthless about discriminating between, because we are easily tricked into imputing the former by a few especially remarkable examples of the latter

From what I’ve seen, it usually plays fine for a couple of moves, then starts trying to play illegal moves in the midgame.  Inclines me to think it’s memorized a bunch of the common openings, but doesn’t actually understand the rules.
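(For what it’s worth, that distinction is cheap to test mechanically. Below is a sketch using the python-chess library to count how far into a game a model’s proposed moves stay legal; the move list is a made-up example of a memorized opening followed by an illegal move, and count_legal_prefix is an invented helper, not anything from the thread.)

```python
# Sketch: how many of a model's moves are legal, checked with python-chess.
# `moves` stands in for a model's output parsed into standard algebraic
# notation. A model that has only memorized openings should start failing
# this check once the game leaves known lines.

import chess

def count_legal_prefix(moves):
    """Number of moves that play legally from the start before the first illegal one."""
    board = chess.Board()
    for i, san in enumerate(moves):
        try:
            board.push_san(san)  # raises a ValueError subclass on illegal moves
        except ValueError:
            return i
    return len(moves)

# Hypothetical output: a real Ruy Lopez line, then a move no bishop can make.
print(count_legal_prefix(["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Bxe5"]))  # -> 6
```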

reblogged

the problem with fetchlands is not the fetchlands, it’s the lands they can fetch. in this regard I find the shocklands to be worse than the alpha duals: they power equally degenerate manabases, except the shocks cause you to start somewhere between 14 and 20 at random

Fetchlands have a lot of impact outside of giving you perfect mana - they give you easy access to shuffle effects (powering up cantrips like ponder and brainstorm, and also slowing down in-person gameplay), stock your graveyard for things like delve, escape, and deathrite shaman, generate card advantage when paired with cards like wrenn and six or ramunap excavator, and so forth. Back when arcum’s astrolabe was legal, plenty of modern decks ran prismatic vista, even though it could only fetch basics, and it still sees fringe play in legacy. If shocklands were banned, I expect fetches would still be the best lands in modern.

Separately, I think the shockland downside is fitting - the normal tradeoff for playing more colors is that you’re better in the late game (because you have higher card quality and more flexible interaction than a mono-color deck), but worse in the early game (because you have a harder time using all your mana and casting spells on curve), so your midrange and control matchups get better and your aggro matchups get worse.  Shocklands maintain essentially that tradeoff, except instead of having limited early game options, you have more flexibility at the cost of giving your opponent a faster clock (fetch-shock manabases are still way too good, but at least conceptually the shockland design seems well-thought out to me). 

reblogged

really enjoying the show about the emo embodiment of sleep. what's it called... Geodude?

I think that's Sandman - Geodude is the Japanese name for that blue robot guy with a gun for an arm in those old Capcom games.

You’re thinking of Rockman - geodude is the superhero from a nineties environmentalism cartoon.  Easy to get them confused, since they’re both blue guys.

reblogged

Today someone at work asked me if I’d been watching the world cup, and I told them, semi-reflexively, that I hadn’t, because I belonged to an obscure sect with a religious injunction against visual depictions of spherical objects in the media. Then they asked me if I was into American Football, since no part of that game is spherical, and I told them no, I don’t watch that either, out of annoyance at the structural hypocrisy of its functionally being a bloodsport in terms of downstream human suffering while not having the integrity to admit to this by letting the players attack each other with weapons during play.

Do you like hockey. They’re pretty honest about the blood sport part

No, because it’s predominantly a Canadian sport and I’m an Acadian Irredentist

Feelings on boxing? Honest bloodsport with no spheres and multiple notable Acadian practitioners.

reblogged

Okay, sorry to break Kayfabe for a while, but y’all know that Goncharov isn’t real, right? It’s fictional. Katya never betrayed Andrey because “Katya” and “Andrey” aren’t real people, they were made up in Matteo JWHJ 0715’s script. “Naples” is just a rename of Genoa (one of the deviations from the aforesaid script) so no one would get sued; you can’t actually go there. Please please PLEASE stop this Mandela effect nonsense and learn to distinguish fiction from reality!

argumate

“Martin Scorsese” started out as a nom de plume for various directors making their way in the industry and old hands slumming it but “his” movies ended up unexpectedly popular, leading to the industry’s oldest in-joke as everyone scurried to concoct a biography for this guy

how do i find that image of the skeleton knocking on doors and trailing blood, i have the perfect edit idea to add to this post


Terra Ignota fic, X-TREME spoilers for the entire series, warnings in notes. There’s one chapter left to be posted; it’s saved as a draft now, and I’ll post it tomorrow. :) Su-Hyeong’s draft messages to 9A (unsent).

I’m supposed to write to you. My sensayer said it would help with the guilt, but I don’t know if I want help with it, or if this counts as guilt, really.

perenlop

what’s everyone’s favorite starter trio in pokemon? like not the one with just your favorite starter in it (though ofc they usually overlap) but one where you like every single pokemon in the trio a lot and think they complement each other really well or you just really like the games theyre from, etc. for me its the unova starters, i have so much nostalgia for every one of them and theyre all pretty special to me


Just read the new Scholomance book, spoilers below:

reblogged
ms-demeanor
Anonymous asked:

The thing is that "new atheism" as in the actual movement not only has the same cultural background as Christianity, it also has the same beliefs, to a certain extent. Like, oh, Richard Dawkins doesn't believe Jesus is God? Great, he'll fit right in with a solid like 45% of US Evangelicals.

'New atheists and unitarian evangelical christians both deny the trinity and thus believe the same things' is a take so shitty and catholic that you could only find it in a chamberpot at the vatican, so I just want to know how you smuggled it out past the swiss guard.

argumate

damn, evangelicals are pro-evolution now??

Wait, 30% of Catholics think humans evolved by natural processes without intervention from God?  That seems way higher than I’d’ve predicted.

reblogged

So you're gonna listen to a bunch of dorks with pocket protectors whining about pig-iron quotas and trying to make a "science" out of history? A bunch of losers whose idea of philosophy is scribbling logical formulas, and explaining how some equation means you have to be OK with their creepy sex perversions?

Like, lol, sorry, but if you think Einstein or Popper or whoever has more to say about the human condition than real philosophers, like Nietzsche and Heidegger, you've got another thing coming. You can try to reduce all of human existence to your sad, lonely, rootless individualism, tabulated up in your usurious little account-books, but human beings have always lived in a collective way! There is no individual! I'm sorry if it's scary, little scientist man, that we're asking you to subsume your petty individualism into an actual national collective. But that's life! Sometimes you gotta help other people!

Some of these losers might call themselves ""liberals"", but if they knew anything about history they'd know what "liberalism" inevitably leads to.

Cause here's the thing: there's more to life than numbers. You have to take a holistic view of human value, and be able to appreciate the importance of greatness, glory, purity, and beauty, instead of just adding up all the little rats on heroin and calling that "ethics".

If we don't start educating these guys on history, and art, and philosophy, I'm afraid that something really bad is gonna happen in this country.
