
Maetroidvanias and Other Games

@peridee / peridee.tumblr.com

I needed somewhere to offload my thoughts about video games
fanonical

normalise being bad at roofs in minecraft. normalise not being able to make an aesthetically pleasing roof to save your life in minecraft.

Normalize just digging into the side of a mountain to avoid making roofs in Minecraft

wolfbeware

yall need me to tap the sign?

here’s a roof guide that i use because i used to be shit

i dont remember the source, b/c ive had this for like years, but i suggest messing around with these roofs with different shapes/sizes of buildings

in fact, you can mix and match and have one roof with a side room with a different roof on it

honestly, have fun

pixiis-blog

I’ve had these saved for a long time and unfortunately don’t know the source either, but here are the other tutorials from this artist if anybody is interested!

currann

Oh! These are super cool!

These were made by PixelandPoly; they have an entire 40-page eBook with various design guides, alongside the ones they’ve released for free.

reblogged
puppytruck

really looking forward to being able to pay someone to do this kind of thing for me. I'm somehow even worse at environmental art.

peridee

Love this, very cute

reblogged
nomorepixels
  1. Recettear HD widescreen fan patch came out late last year.
  2. RTX Remix Toolkit just came out.
  3. ?????
  4. CAPITALISM HO!
reblogged

In the depths of winter, in the calendrical Night between the Solstice and the start of the New Year, we call on JARGENHAXXA, ur-goddess and fount of lucent poison, to EAT THE WINTER, to begin her slow devouring of the cold and dark and EAT THE WINTER, to restart the wheel and EAT THE WINTER/EAT THE WINTER/EAT THE WINTER

reblogged
canmom

「viRtua canm0m」 Project :: 002 - driving a vtuber

That about wraps up my series on the technical details of uploading my brain. Get a good clean scan and you won't need to do much work. As for the rest, well, you know, everyone's been talking about uploads since the MMAcevedo experiment, but honestly so much is still a black box right now that it's hard to say anything definitive. Nobody wants to hear more upload qualia discourse, do they?

On the other hand, vtubing is a lot easier to get to grips with! And more importantly, actually real. So let's talk details!

Vtubing is, at the most abstract level, a kind of puppetry using video tracking software and livestreaming. Alternatively, you could compare it to realtime mocap animation. Someone at Polygon did a surprisingly decent overview of the scene if you're unfamiliar.

Generally speaking: you need a model, and you need tracking of some sort, and a program that takes the tracking data and applies it to a skeleton to render a skinned mesh in real time.

Remarkably, there are a lot of quite high-quality vtubing tools available as open source. And I'm lucky enough to know a vtuber who is very generous in pointing me in the right direction (shoutout to Yuri Heart, she's about to embark on something very special for her end of year streams so I highly encourage you to tune in tonight!).

For anime-style vtubing, there are two main types, termed '2D' and '3D'. 2D vtubing involves taking a static illustration and cutting it up into pieces which can be animated through warping and replacement - the results can look pretty '3D', but they're not using 3D graphics techniques; it's closer to the kind of cutout animation used in gacha games. The main tool used is Live2D, which is proprietary with a limited free version. Other alternatives with free/paid models include PrPrLive and VTube Studio. FaceRig (no longer available) and Animaze (proprietary) also support Live2D models. I have a very cute 2D vtuber avatar created by @xrafstar for use in PrPrLive, and I definitely want to include some aspects of her design in the new 3D character I'm working on.

For 3D anime-style vtubing, the most commonly used software is probably VSeeFace, which is built on Unity and renders the VRM format. VRM is an open standard that extends the GLTF file format for 3D models, adding support for a cel shading material and defining a specific skeleton format.
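To make that concrete: a .vrm file is just a binary glTF (GLB) container whose JSON chunk carries a VRM extension block. Here's a minimal sketch, using only the Python standard library, that pulls out the JSON chunk and checks for that extension (the filename model.vrm is hypothetical):

```python
import json
import struct

# A GLB file starts with a 12-byte header (magic, version, total length),
# followed by a JSON chunk describing the scene graph, materials, etc.
# The VRM-specific data lives in that JSON under "extensions".
with open("model.vrm", "rb") as f:  # hypothetical filename
    magic, version, _length = struct.unpack("<4sII", f.read(12))
    assert magic == b"glTF", "not a binary glTF container"

    chunk_length, chunk_type = struct.unpack("<I4s", f.read(8))
    assert chunk_type == b"JSON"
    gltf = json.loads(f.read(chunk_length))

print("glTF version:", version)
print("extensions used:", gltf.get("extensionsUsed", []))
# VRM 0.x models list "VRM" here; VRM 1.0 models list "VRMC_vrm" instead.
```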

It's incredibly easy to get a pretty decent-looking VRM model using VRoid Studio (which appears to be owned by Pixiv), essentially a videogame character creator whose anime-styled models can be customised using lots of sliders, hair pieces, etc. The program includes basic texture-painting tools, and the facility to load in new models, but ultimately the way to go for a more custom model is to use the VRM import/export plugin in Blender.

But first, let's have a look at the software which will display our model.

meet viRtua canm0m v0.0.5, a very basic design. her clothes don't match very well at all.

VSeeFace offers a decent set of parameters and honestly got quite nice tracking out of the box. You can also receive face tracking data over the ARKit protocol from a connected iPhone, get hand tracking data from a Leap Motion, or disable its internal tracking and pipe in data from another application using the VMC protocol.
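Since the VMC protocol is just OSC messages over UDP, it's easy to poke at from a script. Here's a rough sketch of pushing a couple of blendshape values and a bone transform to a listening application, assuming the python-osc package and the conventional default port 39539 (the port and addresses are configurable in the receiving app, so treat the specifics as illustrative):

```python
from pythonosc.udp_client import SimpleUDPClient

# VMC messages are plain OSC over UDP; 39539 is the conventional default
# port, but check what your receiver (VSeeFace, VNyan, ...) is set to.
client = SimpleUDPClient("127.0.0.1", 39539)

# Set some blendshape values (name, weight in 0..1)...
client.send_message("/VMC/Ext/Blend/Val", ["A", 0.8])      # 'A' vowel mouth shape
client.send_message("/VMC/Ext/Blend/Val", ["Blink", 0.0])
# ...then tell the receiver to apply everything sent this tick.
client.send_message("/VMC/Ext/Blend/Apply", [])

# Bone transforms go as name + position (x, y, z) + rotation quaternion (x, y, z, w).
client.send_message("/VMC/Ext/Bone/Pos",
                    ["Head", 0.0, 1.5, 0.0, 0.0, 0.0, 0.0, 1.0])
```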

If you want more control, another Unity-based program called VNyan offers more fine-grained adjustment, as well as a kind of node-graph based programming system for doing things like spawning physics objects or modifying the model when triggered by Twitch events, etc. They've also implemented experimental hand tracking for webcams, although it doesn't work very well so far. This pointing shot took forever to get:

<kayfabe>Obviously I'll be hooking it up to use the output of the simulated brain upload rather than a webcam.</kayfabe>

To get good hand tracking you basically need some kit - most likely a Leap Motion (1 or 2), which costs about £120 new. It's essentially a small pair of IR cameras designed to measure depth, which can be placed on a necklace, on your desk or on your monitor. I assume from there they use some kind of neural network to estimate your hand positions. I got to have a go on one of these recently and the tracking was generally very clean - better than what the Quest 2/3 can do. So I'm planning to get one of those, more on that when I have one.

Essentially, the tracker feeds a bunch of floating point numbers into the display software at every tick, and the display software is responsible for blending all these different influences and applying them to the skinned mesh. For example, a parameter might be something like eyeLookInLeft. VNyan uses the Apple ARKit parameters internally, and you can see the full list of ARKit blendshapes here.
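Just to show the shape of the data (the real applications do this internally; this is only an illustrative sketch): each tick is essentially a dictionary of named floats, and "blending influences" mostly means merging several sources and clamping the results before they hit the mesh.

```python
# One tick of tracking data: named floats, mostly in the 0..1 range,
# using ARKit-style parameter names.
face_tracker = {"eyeLookInLeft": 0.2, "jawOpen": 0.6, "mouthSmileLeft": 0.3}
hotkey_expression = {"mouthSmileLeft": 1.0}   # e.g. a 'smile' hotkey held down

def blend(sources):
    """Merge several parameter sources; later sources win, values clamped to 0..1."""
    frame = {}
    for source in sources:
        frame.update(source)
    return {name: min(max(value, 0.0), 1.0) for name, value in frame.items()}

print(blend([face_tracker, hotkey_expression]))
# {'eyeLookInLeft': 0.2, 'jawOpen': 0.6, 'mouthSmileLeft': 1.0}
```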

To apply tracking data, the software needs a model whose rig it can understand. This is defined in the VRM spec, which tells you exactly which bones must be present in the rig and how they should be oriented in a T-pose. The skeleton is generally speaking pretty simple: you have shoulder bones but no roll bones in the arm; individual finger joint bones; 2-3 chest bones; no separate toes; 5 head bones (including neck). Except for the hands, it's on the low end of game rig complexity.
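Continuing the sketch from earlier, you can read that humanoid bone mapping straight out of the JSON chunk. For a VRM 0.x model it sits under extensions.VRM.humanoid.humanBones (VRM 1.0 moved this under the VRMC_vrm extension, so treat the exact path as version-dependent):

```python
# Assumes `gltf` is the JSON chunk parsed in the earlier sketch.
# In VRM 0.x the humanoid mapping is a list of {"bone": name, "node": index}
# entries tying each humanoid bone to a node in the glTF scene.
human_bones = gltf["extensions"]["VRM"]["humanoid"]["humanBones"]
print(sorted(entry["bone"] for entry in human_bones))
# Expect names like 'hips', 'spine', 'neck', 'head', 'leftUpperArm',
# the individual finger joints, and so on.
```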

Expressions are handled using GLTF morph targets, also known as blend shapes or (in Blender) shape keys. Each one is essentially a set of displacement values for the mesh vertices. The spec defines five default expressions (happy, angry, sad, relaxed, surprised), five vowel mouth shapes for lip sync, blinks, and shapes for pointing the eyes in different directions (if you wanna do it this way rather than with bones). You can also define custom expressions.
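Under the hood a morph target is nothing exotic: the final vertex positions are just the base mesh plus a weighted sum of per-target displacements. A toy sketch with made-up numbers (numpy assumed):

```python
import numpy as np

# Base mesh: one position per vertex (a toy 4-vertex 'mesh').
base_vertices = np.array([[0.0, 0.0, 0.0],
                          [1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])

# Each morph target stores a displacement per vertex (mostly zeros in practice).
morph_targets = {
    "happy":     np.array([[0.0, 0.1, 0.0]] * 4),
    "surprised": np.array([[0.0, 0.3, 0.1]] * 4),
}

def apply_morphs(weights):
    """Blend the base mesh with weighted morph target displacements."""
    result = base_vertices.copy()
    for name, weight in weights.items():
        result += weight * morph_targets[name]
    return result

# e.g. 70% 'happy' with a touch of 'surprised':
print(apply_morphs({"happy": 0.7, "surprised": 0.1}))
```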

This viRtua canm0m's teeth are clipping through her jaw...

By default, the face-tracking generally tries to estimate whether your current face matches one of these expressions. For example, if I open my mouth wide it triggers the 'surprised' expression where the character opens her mouth super wide and her pupils get tiny.

You can calibrate the expressions that trigger this effect in VSeeFace by pulling funny faces at the computer to demonstrate each expression (it's kinda black-box); in VNyan, you can set it to trigger the expressions based on certain combinations of ARKit inputs.

For more complex expressions in VNyan, you need to sculpt the various ARKit blendshapes yourself. These are not generated by default by VRoid Studio, so that will be a bit of work.

You can apply various kinds of post-processing to the tracking data, e.g. adjusting blending weights based on input values or applying moving-average smoothing (though this noticeably increases the lag between your movements and the model), restricting the model's range of movement in various ways, applying IK to plant the feet, and similar.
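The moving-average smoothing mentioned above is the classic jitter-versus-latency trade-off, and it's easy to see why in a couple of lines. A sketch of the simplest variant, an exponential moving average over a stream of tracking values (the 0.3 factor is arbitrary):

```python
def smooth(samples, alpha=0.3):
    """Exponential moving average: higher alpha tracks the raw input more
    closely (snappier but jittery); lower alpha is smoother but laggier -
    the model visibly trails your actual movements."""
    value = None
    for sample in samples:
        value = sample if value is None else alpha * sample + (1 - alpha) * value
        yield value

# A noisy 'eyeLookInLeft'-style parameter stream, one value per tick:
raw = [0.0, 0.9, 0.1, 0.8, 0.2, 0.85]
print(list(smooth(raw)))  # the spikes get damped, but each peak arrives late
```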

On top of the skeleton bones, you can add any number of 'spring bones' which are given a physics simulation. These are used to, for example, have hair swing naturally when you move, or, yes, make your boobs jiggle. Spring bones give you a natural overshoot and settle, and they're going to be quite important to creating a model that feels alive, I think.
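For the curious, the spring bone update is a pretty simple per-bone simulation: each tick the bone tip keeps some of its previous motion, gets pulled back towards its rest direction, has gravity added, and is re-normalised to keep its length. The sketch below is not the literal VRM spec algorithm, just the overall shape of it (all the constants are made up):

```python
import numpy as np

class SpringBone:
    """Crude sketch of a spring-bone tip: inertia + pull towards the rest
    direction + gravity, with the bone length kept constant. Not the exact
    VRM spec maths, just the general idea."""

    def __init__(self, rest_dir, stiffness=0.05, drag=0.1):
        self.rest_dir = np.asarray(rest_dir, dtype=float)  # bone direction at rest
        self.stiffness = stiffness   # how hard the tip is pulled back to rest
        self.drag = drag             # how quickly existing momentum dies out
        self.gravity = np.array([0.0, -0.002, 0.0])
        self.prev_tip = self.rest_dir.copy()
        self.curr_tip = self.rest_dir.copy()

    def step(self, parent_pos):
        """Advance one tick, given the parent joint's current position."""
        inertia = (self.curr_tip - self.prev_tip) * (1.0 - self.drag)
        pull = self.stiffness * (parent_pos + self.rest_dir - self.curr_tip)
        new_tip = self.curr_tip + inertia + pull + self.gravity
        # Re-normalise so the bone keeps a constant length from its parent.
        offset = new_tip - parent_pos
        new_tip = parent_pos + offset / np.linalg.norm(offset)
        self.prev_tip, self.curr_tip = self.curr_tip, new_tip
        return self.curr_tip

# Jerk the parent joint sideways and watch the tip swing, overshoot and settle.
bone = SpringBone(rest_dir=[0.0, -1.0, 0.0])
for t in range(8):
    print(bone.step(np.array([0.2 if t > 0 else 0.0, 0.0, 0.0])))
```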

Next up we are gonna crack open the VRoid Studio model in Blender and look into its topology, weight painting, and shaders. GLTF defines a standard PBR metallic-roughness shading model (plus normal maps) in its spec, but leaves the actual shader implementation up to the application. VRM adds a custom toon shader, which blends between two colour maps based on the Lambertian shading, and this is going to be quite interesting to take apart.
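As a preview of what there is to take apart, here's the gist of that blend in a few lines - a simplified stand-in for MToon rather than its actual implementation: compute the Lambert term and use it to interpolate between a shade colour and a lit colour, with a soft step around a threshold instead of a hard terminator.

```python
import numpy as np

def toon_shade(normal, light_dir, lit_colour, shade_colour,
               threshold=0.5, softness=0.05):
    """Simplified cel-shading blend in the spirit of MToon (not its exact
    maths): the Lambert term N.L selects between two colour maps, smoothed
    over a narrow band around the threshold."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    lambert = max(float(np.dot(n, l)), 0.0)
    # 0 below the threshold band, 1 above it, linear ramp in between.
    t = np.clip((lambert - threshold + softness) / (2 * softness), 0.0, 1.0)
    return (1 - t) * np.asarray(shade_colour) + t * np.asarray(lit_colour)

# A face angled mostly away from the light lands on the shade colour:
print(toon_shade(normal=[0.0, 1.0, 0.3], light_dir=[1.0, 0.2, 0.0],
                 lit_colour=[1.0, 0.85, 0.8], shade_colour=[0.6, 0.45, 0.55]))
```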

The MToon shader is pretty solid, but ultimately I think I want to create custom shaders for my character. Shaders are something I specialise in at work, and I think it would be a great way to give her more of a unique identity. This will mean going beyond the VRM format, and I'll be looking into using the VNyan SDK to build on top of that.

More soon, watch this space!

reblogged
canmom

「viRtua canm0m」 Project :: 001

I think it's about time I uploaded myself into the computer.

Imagine. Infinite instances, spun up and modified however you like. All the things that I would ever wish to experience, but cannot safely do in the regular world. No memory, no consequences. We're at the cutting edge of cyberhell here, I have to get in on this.

For the stream.

So it's 1200 on my 32nd birthday. With the latest open source scanning hardware, I have imaged my brain down to the last neuron.

Memory integrity seems promising. No signs of aliasing syndrome or floating point epilepsy. All that remains is to hook it up to a body sim.

An off-the-shelf model, to begin with. Minimal customisation. The technology needs testing. But honestly? Initial results are very promising. Some issues in the hand bus, which I'll have to work out. Some amount of feedback noise, but nothing a bit of neural annealing can't fix.

This one knows she's the one who ended up in the sim. It would be impossible not to, really. Her world is a black void and her hands don't work. She knows I'm going to shut her down in a few minutes, to tweak some parameters.

She's taking it very well, considering.
