
Queen Meve Is The Hero I Needed Since I Was Five

@straysinfiltrator / straysinfiltrator.tumblr.com

Kaley || 30+ she/her || Ao3 || Sideblogs: Blake's 7, Coldfire Trilogy
Anonymous asked:

(woundfucker anon again) I love being in a sixty year old fandom, I really do. Where else would you get best enemies erotic brain surgery fic originally handwritten in 1998 and uploaded to Ao3 by the grandchild of the woman who wrote it? Honestly, it makes me kind of emotional, that not only has transformative fandom been here for so long, but there were people writing the fucked up shit *I* enjoy way back when too. It's really hot too!

kadharonon

I'm going to guess this is Brain Surgery Erotica by castrovalvangrandma

(No, I have no clue what I was looking for when I ran across that.)

Hah!

I would have guessed Blake's 7, but Doctor Who makes a lot of sense.

@straysinfiltrator My determination to watch Blake's 7 just cranked up another notch because what the fuck is going on over there that would lead OTNF to guessing this fic description as that fandom XD

You know your fandom has the good shit when...

Join us and find out ;)


I don't think that making or using AI art/image generation is morally wrong, as you guys know, but I have to admit that slotting a wibbly-lined, low-effort image pulled from the first Midjourney result set into your content is incredibly tacky. At least pick something that looks good. Maybe something without the boring AI "sheen" look, too.

You see this a lot in clickbaity content like web spam and YouTube shorts trying to game the algorithm.

Some quick tips for peeps using MJ who don't want their stuff to look like everyone else using MJ.

1. The Ultra Stylish All Dress Alike

First, adjust your style setting (--s <number>). This is how much of the Midjourney 'secret sauce' is added. The lower the number, the closer the result stays to your prompt, at the cost of some coherence and 'prettiness'.

Same prompt, same seed: style 500 on the left, style 25 (my preferred setting) on the right. The differences in lighting, pose, skin reflectivity, etc. are apparent. You can think of it as a "de-Instagram" setting.
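
If you want to see the effect for yourself, a quick made-up example: run the same prompt twice with the seed pinned, something like

a knight in a sunflower field --seed 1234 --s 500
a knight in a sunflower field --seed 1234 --s 25

and the only thing that changes between the two is how much of that house style gets layered on top.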

2. Use Your Variations

"A dinosaur astronaut", original gen, vary (subtle) and Vary (strong) results

If you get something you like as an initial result, the first thing I tend to do is immediately do a Vary (Strong) on it. The results are usually better than the original because you're essentially re-running the initial prompt with the first result as an additional image prompt, reinforcing the subject.

Using subtle variations to get closer to what you want is one of the big features of MJ, and it's sorely underused.

I recommend switching to low variation mode so your normal variations are subtle ones, and turning on remix mode. Remix mode prompts you to change the prompt every time you do a variation. If you're close but something's not quite right, that's an easy way to go.
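
(Both of those live under the /settings command in Discord as toggle buttons — look for Remix mode and the variation-mode buttons. I'm going from memory here, so check the current names if yours looks different.)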

Changing prompts is especially useful when combined with Vary (Region). Basically, if you like everything about a pic except some select details, you can highlight an area and have the system produce new variations that only change that region. Its obvious use is fixing generation errors, but by changing the prompt, you can get results the robot can't imagine from a solo prompt.

3. Get Weird With It

--weird is a highly underused setting. It's also very powerful: while it goes up to 1000, I tend not to go above 1-5.

A cartoon penguin made of knives at 0 weird, 1 weird, and 50 weird:

Near as I can tell, weird relaxes the 'does this make sense' checks, allowing for more out-there results, both in terms of style and subject matter. At the very low levels (1-5) it increases prompt adherence; at higher levels it reduces it substantially.
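
As a made-up starting point for the low end of that range, something like

a cartoon penguin made of knives --weird 5 --s 25

since a small --weird value and a low --s value both pull away from the default look.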

4. Prompt Big and Blend your Concepts

MJ deals with short prompts by filling in its own leanings where there are gaps, so short prompts look the most Midjourney-esque, while longer ones (especially when combined with --weird or a low --s value) fight back against it more.

When prompting for art styles, asking for multiple art styles/artists at once produces weird hybrids. Prompting an artist with the wrong medium (a painting by a sculptor, a drawing by a cinematographer, etc.) can also produce new, strange results.

(Lisa Frank/H.R. Giger style mashup using :: technique (below))

But the real trick is in "prompt smashing" or multi-prompting. Basically, Midjourney uses :: to split prompts. The intended function is to let you add individual weights to each section, if you want something strongly emphasized.
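
The syntax, with a made-up example: the weight goes right after the ::, and anything without one defaults to 1, so

lisa frank sticker art::2 h.r. giger biomechanical etching

leans the blend toward the Lisa Frank half.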

But in practice, it blends the concepts of the two prompts to create a new, third thing.

As above with "an illustration of an armored dinosaur in the woods, in the style of vibrant comics, toycore, ps1 graphics, national geographic photo, majestic elephants, exotic, action painter" then "daft punk & particle party daft punk world tour, in the style of romina ressia, polished craftsmanship, minimalistic metal sculptures, ultra hd, mark seliger, installation-based, quadratura" and then the two prompts run together separated by ::.

Each one was also iterated to produce a better result than the initial gen.

5. Just edit the darn thing.

Learning to edit your works outside of the AI system will always improve your work beyond what the machine itself can do. Whether it's just the simple matter of doing color adjustment/correction:

Or more heavy compositing and re-editing combined with other techniques:

Editing and compositing your gens is always superior to just posting them raw, even if it's just a little cropping to get the figure slightly off-center, or compiling 20-ish individual gens into a single comic-style battle scene before recoloring it from scratch.

But at the very least, before you post, go over the image and make sure there's nothing glaring.

This is really remarkable. I was literally just talking about how AI isn't good enough for comic art, and then immediately I see...a pretty good mini-comic, apparently made with AI!

(It's not the first AI-illustrated comic I've seen, but it's the first where I thought the art was any good.)

I guess @deepdreamnights is just really damn good at this.

If you (or another skilled AI wrangler) wanted to, do you think it would be feasible to make a whole long-running comic that way? If so, would it represent a significant time savings compared to just drawing the thing normally?

First off, thank you!

I can generally put together a page and a half a day this way, depending on how tricky the composition is and how devoted to it I am at the moment. Basically even-stevens with a lot of hand-drawn work, but with a mix of advantages and disadvantages.

I tend to make the process harder on myself than it needs to be.

This is partially because the datasets trend to the modern and my insistence on the look of the 60s, 70s or 80s for any given project means I'm doing a lot of reworks and tinkering. The dinosaur and battle-animal-style furry characters are harder to keep consistent than human characters, etc.

Part of it is I'm always trying to squeeze maximum results out of the tech, so every time there's a new version, my ambitions seem to inflate to match, keeping my general time-investment steady.

Human characters and more modern styles come easier to the generators, so if you're dealing with an easily promptable set of character features, or ones that MJ's character reference system can make sense of, you'd probably have a much faster time producing usable raws for compositing. And fixing hairstyles and shirt-cuffs tends to be easier than fixing tusks and horns.

The areas that are difficult are things like fighting (less of a problem with Stable Diffusion; there's a little Cotton Mather in the major service-driven ones that imposes rules stricter than Y-7 cartoons). For Mrs. Nice kicking Wally into the time hole (as an example):

Mrs. Nice is kicking a punching bag, and Wally is slipping on the floor. The background was cobbled together from several chunks so the cave environment would be consistent.

Sometimes you have to prompt indirectly or use placeholder props to get the kinds of actions you want. It might not understand a sci-fi brainwashing chair, but it can do an electric chair made of futuristic metal. "Octopus tentacles for hair" makes hairy tentacles, so you put "tentacles coming out of her head." They aren't frozen in a block of ice, they're in a glass box that is covered in frost. That sort of thing.

It doesn't make a huge difference for my workflow, because I tend to composite characters from multiple gens in most situations. I find it, counterintuitively, makes for less stiff compositions.

The fauxtalgia-comics look, however, is just one option in hojillions, because my techniques work on any style of art you want to use. The photoshoppery just gets more complicated for stuff that's painterly, pseudo-photocomic-y, or otherwise.

I consider myself a nostalgia-artist (not a nostalgic artist, though I am often that; I try to make nostalgia my medium). I've got a whole deal about it that I'll get into in its own post sometime.

But experimental and hybrid styles can produce some wild stuff, and I'm certain experimental forms are going to benefit from it.

Very interesting, thanks for the run-down.

Part of me is happy to see that, even with this incredible technology, producing work deserving of an audience still depends on skill and effort. Another part of me wishes comics were easier, though. Don't know which part's bigger.

Depends on what you mean by easier.

If your skill set is Photoshop, you're essentially transferring more of the comic-making process into that skill's purview. And there are a lot of free AI utilities for background removal and the like that can speed up the grunt work in that process.

If you wanted to make a comic in this way, the best thing you can do is just start making comics with this technique, then adapt it to what you learn and what you find works for you. If it takes longer than you expect at first, that's fine. As you improve, your speed will improve along with your technique.

I've even used the technique pre-generative AI by utilizing public domain comics and other PD art resources.
