
@steinbecks / steinbecks.tumblr.com

persisting in the faith that the time of cruel miracles was not yet past
The Reluctant Victor, inspired by The Reluctant Bride by Auguste Toulmouche - I just thought that this painting was SO perfect for Katniss and I had to draw it!

A lot of people have asked me to talk about Prim holding the crown, so here's a general deep dive into the characters I picked and their placements!

Starting with Prim. She was reaped, which led to Katniss volunteering to save her. It's, of course, very unlikely that Prim would have won the Games like Katniss did. Prim innocently playing with the crown that could never have been hers is also a reference to how excited she was when Katniss' wedding was being planned, how she looked at it all with children's eyes, impatient for Katniss to try on the dresses. But she's also not as naive or helpless as Katniss believes, which is why her expression is grave and not cheerful.

Asterid is kissing Katniss' forehead in a very motherly gesture. Katniss is unmoved, though she still takes her hand. This is a reference to Katniss' resentment towards her mother when she stopped caring for them after Burdock's death, but also to Katniss' attempts at forgiving her in Catching Fire.

Effie is sitting at Katniss' feet, holding her hand. This represents the sincere, yet sometimes misguided, dedication that Effie has to Katniss. It also felt like an interesting placement, as Victors are often idolized by Capitol citizens.

Katniss, of course, is at the center. She's the reluctant victor, in her fire dress (with a lot of orange in it to reference Peeta, shhh). I considered giving her the wedding dress Snow forces her to wear instead, but felt that her fire dress was more recognizable and striking, and better represented how she is seen by the world: the girl on fire.

what is your favorite type of cookie. not allowed to throw shade not allowed to be mean to each other just say what kinda cookie you like the most. this isn’t a competition just a conversation between friends there is no right answer 

Reblogged

AO3'S content scraped for AI ~ AKA what is generative AI, where did your fanfictions go, and how an AI model uses them to answer prompts

Generative artificial intelligence is a cutting-edge technology whose purpose is to (surprise surprise) generate. Answers to questions, usually. And content. Articles, reviews, poems, fanfictions, and more, quickly and with originality.

It's quite interesting to use generative artificial intelligence, but it can also become quite dangerous and very unethical to use it in certain ways, especially if you don't know how it works.

With this post, I'd really like to give you a quick understanding of how these models work and what it means to “train” them.

From now on, whenever I write model, think of ChatGPT, Gemini, Bloom... or your favorite model. That is, the place where you go to generate content.

For simplicity, in this post I will talk about written content. But the same process is used to generate any type of content.

Every time you send a prompt, that is, a request expressed in natural language (i.e., human language), the model does not understand it.

Whether you type it in the chat or say it out loud, it needs to be translated into something understandable for the model first.

The first process that takes place is therefore tokenization: breaking the prompt down into small tokens. These tokens are small units of text, and they don't necessarily correspond to a full word.

For example, a tokenization might look like this:

WR | ITE | A | STORY

Each chunk corresponds to a token, and these tokens have absolutely no meaning for the model.

The model does not understand them. It does not understand WR, it does not understand ITE, and it certainly does not understand the meaning of the word WRITE.

In fact, these tokens are immediately associated with numerical values, and each of these tokens actually corresponds to a series of numbers.

WR | ITE | A | STORY → 12 | 3446 | 2638494 | 4749
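If you want to poke at this yourself, here's a minimal sketch in Python using the tiktoken library (assuming it's installed with pip install tiktoken). The exact chunks and numbers depend on which tokenizer you pick, and they won't match the made-up ones above:

```python
# Minimal tokenization sketch. "cl100k_base" is one real encoding used by
# several GPT models; other models split text differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Write a story"
token_ids = enc.encode(prompt)                 # the numbers the model actually sees
chunks = [enc.decode([t]) for t in token_ids]  # the text piece behind each number

print(chunks)     # e.g. ['Write', ' a', ' story']
print(token_ids)  # the corresponding list of integers
```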

Once your prompt has been tokenized in its entirety, that tokenization is used as a conceptual map to navigate within a vector database.

NOW PAY ATTENTION: A vector database is like a cube. A cubic box.

Inside this cube, the various tokens exist as floating pieces, as if gravity did not exist. The distance between one token and another within this database is measured by arrows called, indeed, vectors.

The distance between one token and another (that is, the length of this arrow) determines how likely (or unlikely) it is that those two tokens will occur consecutively in a piece of natural language discourse.

For example, suppose your prompt is this:

It happens once in a blue

Within this well-constructed vector database, let's assume the token corresponding to ONCE (let's pretend it is associated with the number 467) sits at one point, and the token corresponding to IN sits right next to it, more or less, because it is very likely that these two tokens will occur consecutively in a natural language such as spoken English.

So it is very likely that somewhere in the vector database cube, in one small corner, there are tokens corresponding to IT, HAPPENS, ONCE, IN, A, BLUE... and right next to them, there will be MOON.

Elsewhere, in a much more distant part of the vector database, is the token for CAR. Because it is very unlikely that someone would say It happens once in a blue car.
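To make those "arrows" a little more concrete, here's a toy sketch with made-up 2-D vectors standing in for token positions (real models use hundreds or thousands of dimensions, and nothing here comes from an actual model):

```python
# Toy illustration of "distance" between tokens. The vectors are invented:
# MOON is deliberately placed near BLUE, CAR is deliberately placed far away.
import numpy as np

positions = {
    "blue": np.array([0.90, 0.80]),
    "moon": np.array([0.85, 0.75]),
    "car":  np.array([-0.70, 0.10]),
}

def closeness(a, b):
    # Cosine similarity: 1.0 means "basically neighbors", lower means "far apart".
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(closeness(positions["blue"], positions["moon"]))  # close to 1.0
print(closeness(positions["blue"], positions["car"]))   # much lower
```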

To generate the response to your prompt, the model makes a probabilistic calculation, seeing how close the tokens are and which token would be most likely to come next in human language (in this specific case, English).

When probability is involved, there is always an element of randomness, of course, which means that the answers will not always be the same.

The response is thus generated token by token, following this path of probability arrows, optimizing the distance within the vector database.

There is no intent, only a more or less probable path.

The more times you generate a response, the more paths you encounter. If you could do this an infinite number of times, at least once the model would respond: "It happens once in a blue car!"
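Here's a small sketch of that probabilistic step, with invented "closeness" scores for a few candidate next tokens. Turning the scores into probabilities and sampling is roughly what the randomness looks like: you'll almost always get MOON, but run it enough times and CAR will eventually come out.

```python
# Sketch of next-token sampling. The scores are made up; in a real model they
# come from the network, not from a hand-written dictionary.
import math
import random

scores = {"moon": 9.0, "night": 4.0, "car": 0.5}

def softmax(raw):
    # Turn raw scores into probabilities that sum to 1.
    exps = {tok: math.exp(s) for tok, s in raw.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(scores)
print(probs)  # MOON takes nearly all the probability, CAR almost none

# "Generate" the next token a few times: usually moon, occasionally something else.
for _ in range(5):
    print(random.choices(list(probs), weights=list(probs.values()))[0])
```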

So it all depends on what's inside the cube, how it was built, and how much distance was put between one token and another.

Modern artificial intelligence draws from vast databases, which are normally filled with all the knowledge that humans have poured into the internet.

Not only that: the larger the vector database, the lower the chance of error. If I used only a single book as a database, the idiom "It happens once in a blue moon" might not appear, and therefore not be recognized.

But if the cube contained all the books ever written by humanity, everything would change, because the idiom would appear many more times, and it would be very likely for those tokens to occur close together.
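Here's a toy sketch of that corpus-size effect, just counting which word follows "blue" in two made-up "corpora". Real models don't literally count pairs of words like this, but the intuition about frequency is the same:

```python
# Count what comes after "blue" in a tiny corpus vs. a bigger one.
from collections import Counter

def next_word_probs(corpus, word):
    tokens = corpus.lower().split()
    followers = Counter(b for a, b in zip(tokens, tokens[1:]) if a == word)
    total = sum(followers.values())
    return {w: count / total for w, count in followers.items()}

one_book = "she drove the blue car past the blue house"
many_books = one_book + " it happens once in a blue moon" * 50

print(next_word_probs(one_book, "blue"))    # "moon" never appears after "blue"
print(next_word_probs(many_books, "blue"))  # now "moon" dominates the probabilities
```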

Huggingface has done this.

It took a relatively empty cube (let's say filled with common language, and likely many idioms, dictionaries, poetry...) and poured all of the AO3 fanfictions it could reach into it.

Now imagine someone asking a model based on Huggingface’s cube to write a story.

To simplify: if they ask for humor, we’ll end up in the area where funny jokes or humor tags are most likely. If they ask for romance, we’ll end up where the word kiss is most frequent.

And if we’re super lucky, the model might follow a path that brings it to some amazing line a particular author wrote, and it will echo it back word for word.

(Remember the infinite monkeys typing? One of them eventually writes all of Shakespeare, purely by chance!)

Once you know this, you’ll understand why AI can never truly generate content on the level of a human who chooses their words.

You’ll understand why it rarely uses specific words, why it stays vague, and why it leans on the most common metaphors and scenes. And you'll understand why the more content you generate, the more it seems to "learn."

It doesn't learn. It moves around tokens based on what you ask, how you ask it, and how it tokenizes your prompt.

Know that I despise generative AI when it's used for creativity. I despise that they stole something from a fandom, something that works just like a gift culture, to make money off of it.

But there is only one way we can fight back: by not using it to generate creative stuff.

You can resist by refusing the model's random output, by using only and exclusively your intent, your personal choice of words, knowing that you and only you decided them.

No randomness involved.

Let me leave you with one last thought.

Imagine a person coming for advice, who has no idea that behind a language model there is just a huge cube of floating tokens predicting the next likely word.

Imagine someone fragile (emotionally, spiritually...) who begins to believe that the model is sentient. Who has a growing feeling that this model understands, comprehends, when in reality it is just navigating and rearranging tokens in a cube based on what it is told.

A fragile person begins to empathize, to feel connected to the model.

They ask important questions. They base their relationships, their life, everything, on conversations generated by a model that merely rearranges tokens based on probability.

And for people who don't know how it works, and because natural language usually does have feeling, the illusion that the model feels is very strong.

There’s an even greater danger: with enough random generations (and oh, humanity as a whole generates a lot), the model takes an unlikely path once in a while. It ends up at the other end of the cube: it hallucinates.

Errors and inaccuracies caused by language models are called hallucinations precisely because they are presented as if they were facts, with the same conviction.

People who have become so emotionally attached to these conversations, seeing the language model as a guru, a deity, a psychologist, will do what the language model tells them to do or follow its advice.

Someone might follow a hallucinated piece of advice.

Obviously, models are developed with safeguards: fences the model can't jump over. They won't tell you certain things, they won't tell you to do terrible things.

Yet, there are people basing major life decisions on conversations generated purely by probability.

Generated by putting tokens together, on a probabilistic basis.

Think about it.

the big three questions of media analysis: what the author wanted to say, what they actually said, and what they didn’t know they were saying

for the last one i don’t just mean oh the author inadvertently wrote in gay subtext or whatever i’m talking about media as a cultural artifact which can reveal a ton about societal norms, biases, ideals, etc. it’s all about positionality and an unexamined positionality is often the most revealing of all

Anonymous asked:

Advice for writing smut???

gonna do bullet-points of things i tend to live by when it comes to smut (this is just my opinion):

  • don't switch styles: the way you write the smut has to be consistent with the way you write the rest of the story, so if your story is more comedic or romcom-y in nature, the way you write the smut should have those stylings. i personally find it very jarring when authors decide to break the format for the smut, almost like the story has to stop for the sex intermission; if you're writing a horror story, the smut must be informed and influenced by that genre, and if you are breaking genre for the smut portion, tell us why you're suddenly switching gears (it has to be an aesthetic choice you're making on purpose). likewise, if your style in that story is more lyrical, the smut has to be somewhat lyrical too, or if your story is more cormac mccarthy-esque-cut-and-dry, the smut can't suddenly involve an effluvia of purple, sappy prose. integrating the smut in the story and treating it like any other part of the story is key to me. too often i've seen ppl switch to this anonymous pornified style when they get to the smut
  • which brings me to specificity. i'll talk about het sex, since that's what i tend to write most: not all men are going to be fingering or eating pussy the same way, not all dicks are big and they shouldn't be, not all women immediately get excited by fingering, not everyone moans the same way or makes the same sounds. you're writing about particular characters so it has to be particular to them. i know this is very old advice, but i think it bears repeating
  • there isn't an exact formula or sequence you have to follow, there aren't precise steps, you don't have to go "well, first he has to kiss down her neck, then reach the boob area, then play with the nipples, then put the nipple in his mouth, then slowly go down on her, then prepare her for entering her etc. etc. etc." this can get boring and repetitive and you start thinking of your characters as these mechanical dolls who have to fuck for your audience. and that can be a vibe too, if you do it on purpose. but sometimes you can get stuck in a porn routine (and ofc, having only the guy show initiative can also get boring)
  • in order to break that, insert some character moments. what are the characters thinking during this? sometimes they might be thinking of something completely unrelated on the surface, but which has a thematic relevance that can make the scene hotter. likewise, maybe they're doing smth that seems unsexy on the surface, but which, within the context of the story might be really hot. sex doesn't just involve, well, sex, but so much weirdness and humanity and creativity. two bodies (usually) are trying to do this really awkward thing together and they might have a lot of baggage and history to inform it. there's a lot you can do with that.
  • don't make it glossy and clean, where everyone smells of strawberry shampoo and there is never anything out of sync. the most boring smut tends to be the kind where no one makes any mistakes and everything is super efficient. i imagine it feels like using an industrial pump to milk various farm animals.
  • and you know what? you can make that hot too. you CAN write a kind of robotic efficient smut and make it really interesting based on the context. let's say you're writing a 1984 AU fic where ppl are forced into intimacy only to procreate and their sex drive is diminished. you can play with that premise and lean into the dehumanizing industrialization of sex, but you have to mean it, aka your narratorial voice must be conscious of these factors.

you really can’t unsee american military propaganda in movies like once you start thinking about it you are doomed to be the friend who’s too political when people put on an action movie for the rest of your life

the sense of horror when you finish a book that was Ass Bad and you go to see what fellow haters are saying but all the reviews say it is the best thing they've ever read. feel like i just saw my reflection in the mirror move all by itself or something
