
I'm Not Really Here

@wolvensnothere / wolvensnothere.tumblr.com

This is not a Tumblr.
reblogged

If you need any DME/wheelchair parts/other assistive devices, enableyourlife.com has some wild prices rn!! they close at the end of March so it's all final sale, but I'm about to stock up on $5 caster wheels :)

reblogged
llatimeria

So apparently the pro-Tetris scene is exploding right now because a 13-year-old nerd just reached the game's true killscreen for the first time ever

So, basically, for much of Tetris's history, people believed level 29 was the "last" level of Tetris, as the speed of the blocks would get so high that no human could do anything but lose; the blocks would go so fast that human hands physically could not control them. However, Tetris does not get any faster beyond that point, so if you're capable of playing level 29, you're capable of playing hypothetically infinitely.
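
(An aside for the technically inclined: that speed curve is usually described as a frames-per-row gravity table. Here's a minimal Python sketch of it, using the NTSC frame counts commonly cited in community documentation; treat the exact numbers as illustrative rather than authoritative. The point is the shape: speed climbs until level 29, then flatlines forever.)

```python
# Minimal sketch of the NES Tetris (NTSC) gravity curve, using the
# frames-per-row values commonly cited in community documentation.
# Treat the exact numbers as illustrative, not authoritative.
EARLY_FRAMES_PER_ROW = {
    0: 48, 1: 43, 2: 38, 3: 33, 4: 28,
    5: 23, 6: 18, 7: 13, 8: 8, 9: 6,
    10: 5, 11: 5, 12: 5,
    13: 4, 14: 4, 15: 4,
    16: 3, 17: 3, 18: 3,
}

def frames_per_row(level: int) -> int:
    """How many frames the game waits before a piece falls one row."""
    if level >= 29:
        return 1  # the cap: the game never gets faster than this
    if level >= 19:
        return 2  # levels 19 through 28
    return EARLY_FRAMES_PER_ROW[level]

# At 60 frames per second, level 29+ means one row per frame, so a piece
# crosses the whole 20-row board in about a third of a second. Level 157
# is exactly as fast as level 29.
for lvl in (18, 19, 29, 157):
    print(lvl, frames_per_row(lvl), "frame(s) per row")
```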

Except Tetris, the original version for the NES, is not a hypothetical. It's a physical object, an item you can touch and hold, and it has limits. Many classic arcade-style video games have honest-to-god killscreens, where the game breaks so badly that it becomes completely unplayable. Pac-Man, famously, has a killscreen that garbles half of the playing field and doesn't spawn enough dots for the level to ever end. Tetris was assumed to be no exception, but because of the presumed-impossible difficulty of level 29, the community considered that to be Tetris's killscreen, and all high-leveled Tetris play centered around level 29 being the absolute end of your run, no matter what.

But if you've heard literally anything about people getting insanely good at retro games, you'll know what comes next: of course, someone figures out how to control the game past level 29. In 2011, Thor Aackerlund discovered a technique now known as "hypertapping" (which is exactly what it sounds like: tapping very, very fast) - and became the first person to play level 30.

But hypertapping wasn't enough. It was still stupidly difficult to get to, let alone past, level 30. Then this guy named Cheez shows up and finds an even more absurd technique, called "Rolling", which was even faster than hypertapping. People weren't just hitting level 30, but then 40, then 50, and then all the way into the 90s. Since all post-29 levels have the exact same speed, once they mastered rolling, they were pretty much good to play forever.

With levels 29+ conquered, players could now face the real killscreen of Tetris. A Tetris-playing AI got the first crash, but since it was playing a very slightly modified version (to show a larger score number, because the vanilla score counter didn't have enough digits), it only kinda-sorta counted. So the community picked apart the game's code to find where the game could hypothetically crash while completely unmodified - and found the current human record was not that far off.

So the entire community fucking scrambles to be the first person to crash Tetris, but then they're confounded by another technically-not-game-ending-but-still-pretty-much-impossible-for-a-human bug: after level 138, the game stops choosing the colors for the blocks from where it's supposed to, leading it to display some truly heinous color palettes. Most of them are just ugly, but a few make the blocks you're placing next to invisible. (This was actually known about before the AI even crashed the game, and it's part of the reason the AI could get so much further than humans; it didn't need to visually see the blocks.)

Just next to invisible, though. You could still sorta see most of the blocks, and when you pass the level, the game pulls a new color palette, so if you can tough it out long enough to get 10 lines, you're probably gonna be able to continue your game for a while after that. It's annoying as hell, but not impossible. So, of course, the runners start getting past them and brushing up against the crashable levels.
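
(Again, for the technically inclined: the explanation you'll usually see for this glitch is an out-of-bounds table read. From level 138 on, the index the game computes for its color lookup can run off the end of the intended table, so it starts treating unrelated bytes as palette data. Here's a loose Python illustration of that failure mode; to be clear, the tables and the indexing math below are stand-ins of mine, not the actual 6502 routine.)

```python
# Loose illustration of an out-of-bounds lookup, the failure mode usually
# given for the post-138 palette glitch. Everything here is a stand-in;
# the real game is 6502 assembly with tables living in ROM and RAM.
INTENDED_PALETTES = [f"palette_{i}" for i in range(10)]  # colors repeat every 10 levels
BYTES_AFTER_TABLE = ["sprite bytes", "music bytes", "who knows"]  # adjacent memory

MEMORY = INTENDED_PALETTES + BYTES_AFTER_TABLE

def palette_for(level: int) -> str:
    if level < 138:
        index = level % 10  # intended behavior: always lands inside the table
    else:
        # stand-in for the buggy computation: the index escapes the
        # 10-entry table, so the "palette" is just whatever memory is nearby
        index = 10 + level % len(BYTES_AFTER_TABLE)
    return MEMORY[index]

print(palette_for(25))   # a normal, intended palette
print(palette_for(138))  # garbage interpreted as colors
print(palette_for(148))  # different garbage; some of it near-invisible
```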

And by runners, I mostly mean a 13-year-old boy who goes by the online handle Blue Scuti. He'd skyrocketed to fame in the Tetris community relatively recently by achieving scores and levels that most adults couldn't even dream of, so of course he was among the first people to get past both impossible-palette levels, and he was able to keep going.

The game doesn't always crash in one specific spot, though. It just starts having a chance to crash after a certain point. You might have to perform some specific actions in specific windows of time to get it to crash on purpose, and it's much more likely that you'll lose control and lose your run before you achieve that goal.

Blue Scuti missed the first crash opportunity in his run. He was the first person to get that far at all, so it'd be a record regardless, but he was determined to win. He somehow keeps his cool, despite being a literal child with thousands of eyes on him (this was streamed on Twitch, of course), and never loses control of his stack, all the way until he reaches the next crash opportunity on level 157.

And he fucking does it. He gets a single line clear in the middle of level 157 and the game just stops. It completely crashed. A 13-year-old boy nicknamed Blue Scuti is the first human being in history to crash Tetris in this way. He is the first person ever to see Tetris's real killscreen. This game is over twice his age, and he is the first to kill it dead.

This kid fucking rules.

(if you want more detail, I learned basically all of the above from this video by aGameScout, please watch it!!)

A thing many of you may not know about me is that I FUCKING LOVE Tetris. The sheer joy of this literally made me teary.

reblogged

B.R.F.R.O.

To all unwanted entities in this space: Fuck Right Off.
To all unwelcomed beings in my sphere: Fuck Right Off.
To all hateful processes in my self: Fuck Right Off.
To all disruptive methodologies in my heart: Fuck Right Off.
From everything in me: Fuck Right Off.
From Everything I Have: Fuck Right Off.
From all that I seek to build: Fuck Right Off.
To everything you are: Fuck Right Off.
I see you, you piece of shit: Fuck Right Off.
I know what you tried to do. Fuck Right Off.
You are not welcome here: Fuck Right Off.
You Will Not Survive Me. So Fuck Right Off.

reblogged
Anonymous asked:

confession: father I have sinned, for I love someone who does not love me back. what is my penance.

Get a mirror, prop it against a wall, get on your knees, grab both vertical sides of the mirror, listen to Two Veruca Salt Songs, and say 15-30 “Fuck Them If They Don’t Want In On This”es.

If you’re not familiar with it, the full penance goes like this:

"Fuck them if they don’t want in on this. I am a fantastic, darkly luminescent consciousness who deserves to be loved in an equal capacity to the love of which I am capable. I will love myself and honour myself and anybody who doesn’t want in on this can go fuck themselves.”

Say it with love and compassion, in your heart, then Go, and Sin No More.

reblogged

Appendix A: An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time

Every so often, I think about one of the best things my advisor and committee members let me write and include in my actual doctoral dissertation, and I smile a bit; and since I keep wanting to share it out into the world, I figured I should put it somewhere more accessible.

So with all of that said, we now rejoin An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time, already (still, seemingly unendingly) in progress:

René Descartes (1637): The physical and the mental have nothing to do with each other. Mind/soul is the only real part of a person.

Norbert Wiener (1948): I don’t know about that “only real part” business, but the mind is absolutely the seat of the command and control architecture of information and the ability to reflexively reverse entropy based on context, and input/output feedback loops.

Alan Turing (1952): Huh. I wonder if what computing machines do can reasonably be considered thinking?

Wiener: I dunno about “thinking,” but if you mean “pockets of decreasing entropy in a framework in which the larger mass of entropy tends to increase,” then oh for sure, dude.

John von Neumann (1958): Wow, things sure are changing fast in science and technology; we should maybe slow down and think about this before that change hits a point beyond our ability to meaningfully direct and shape it— a singularity, if you will.

Clynes & Kline (1960): You know, it’s funny you should mention how fast things are changing because one day we’re gonna be able to have automatic tech in our bodies that lets us pump ourselves full of chemicals to deal with the rigors of space; btw, have we told you about this new thing we’re working on called “antidepressants?”

Gordon Moore (1965): Right now an integrated circuit has 64 transistors, and they keep getting smaller, so if things keep going the way they’re going, in ten years they’ll have 65 THOUSAND. :-O

Donna Haraway (1991): We’re all already cyborgs bound up in assemblages of the social, biological, and technological, in relational reinforcing systems with each other. Also do you like dogs?

Ray Kurzweil (1999): Holy Shit, did you hear that?! Because of the pace of technological change, we’re going to have a singularity where digital electronics will be indistinguishable from the very fabric of reality! They’ll be part of our bodies! Our minds will be digitally uploaded immortal cyborg AI Gods!

Tech Bros: Wow, so true, dude; that makes a lot of sense when you think about it; I mean maybe not “Gods” so much as “artificial super intelligences,” but yeah.

90’s TechnoPagans: I mean… Yeah? It’s all just a recapitulation of The Art in multiple technoscientific forms across time. I mean (*takes another hit of salvia*) if you think about the timeless nature of multidimensional spiritual architectures, we’re already—

DARPA: Wait, did that guy just say something about “Uploading” and “Cyborg/AI Gods?” We got anybody working on that?? Well GET TO IT!

Disabled People, Trans Folx, BIPOC Populations, Women: Wait, so our prosthetics, medications, and relational reciprocal entanglements with technosocial systems of this world in order to survive makes us cyborgs?! :-O

[Simultaneously:]

Kurzweil/90’s TechnoPagans/Tech Bros/DARPA: Not like that.

Wiener/Clynes & Kline: Yes, exactly.

Haraway: I mean it’s really interesting to consider, right?

Tech Bros: Actually, if you think about the bidirectional nature of time, and the likelihood of simulationism, it’s almost certain that there’s already an Artificial Super Intelligence, and it HATES YOU; you should probably try to build it/never think about it, just in case.

90’s TechnoPagans: …That’s what we JUST SAID.

Philosophers of Religion (To Each Other): …Did they just Pascal’s Wager Anselm’s Ontological Argument, but computers?

Timnit Gebru and other “AI” Ethicists: Hey, y’all? There’s a LOT of really messed up stuff in these models you started building.

Disabled People, Trans Folx, BIPOC Populations, Women: Right?

Anthony Levandowski: I’m gonna make an AI god right now! And a CHURCH!

The General Public: Wait, do you people actually believe this?

Microsoft/Google/IBM/Facebook: …Which answer will make you give us more money?

Timnit Gebru and other “AI” Ethicists: …We’re pretty sure there might be some problems with the design architectures, too…

Some STS Theorists: Honestly this is all a little eugenics-y— like, both the technoscientific and the religious bits; have you all sought out any marginalized people who work on any of this stuff? Like, at all??

Disabled People, Trans Folx, BIPOC Populations, Women: Hahahahah! …Oh you’re serious?

Anthony Levandowski: Wait, no, nevermind about the church.

Some “AI” Engineers: I think the things we’re working on might be conscious, or even have souls.

“AI” Ethicists/Some STS Theorists: Anybody? These prejudices???

Wiener/Tech Bros/DARPA/Microsoft/Google/IBM/Facebook: “Souls?” Pfffft. Look at these whackjobs, over here. “Souls.” We’re talking about the technological singularity, mind uploading into an eternal digital universal superstructure, and the inevitability of timeless artificial super intelligences; who said anything about “Souls?”

René Descartes/90’s TechnoPagans/Philosophers of Religion/Some STS Theorists/Some “AI” Engineers: …

[Scene]

———– ———– ———– ———–

and read more of this kind of thing at: Williams, Damien Patrick. Belief, Values, Bias, and Agency: Development of and Entanglement with “Artificial Intelligence.” PhD diss., Virginia Tech, 2022. https://vtechworks.lib.vt.edu/handle/10919/111528.

reblogged

elon musk should kill himself elon musk needs to kill himself elon musk would make society as a whole better if he killed himself now

Sorry to break the theme of the blog but a lot of people who post their incredible works here use Twitter so

Reblogging this for a second time because your art will also be used in AI training models, apparently.

coldalbion

Artist pals, heads up

I know I don't post around here much anymore, but this is pretty important

reblogged

My New Article at WIRED

So, you may have heard about the whole Zoom “AI” Terms of Service clause public relations debacle going on this past week, in which Zoom decided that it wasn’t going to let users opt out of them feeding our faces and conversations into their LLMs. In Section 10.1, Zoom defines “Customer Content” as whatever data users provide or generate (“Customer Input”) and whatever else Zoom generates from our uses of Zoom. Then Section 10.4 says what they’ll use “Customer Content” for, including “…machine learning, artificial intelligence.”

And then on cue they dropped an “oh god oh fuck oh shit we fucked up” blog where they pinky promised not to do the thing they left actually-legally-binding ToS language saying they could do.

Like, Section 10.4 of the ToS now contains the line “Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent,” but again it still seems a) that the “customer” in question is the Enterprise, not the User, and b) that “consent” means “clicking yes and using Zoom.” So it’s Still Not Good.

Well anyway, I wrote about all of this for WIRED, including what Zoom might need to do to gain back customer and user trust, and what other tech creators and corporations need to understand about where people are, right now.

And frankly the fact that I have a byline in WIRED is kind of blowing my mind, in and of itself, but anyway…

Also, today, Zoom backtracked Hard. And while I appreciate that, it really feels like Zoom decided to take their ball and go home rather than offer meaningful consent and user control options. That’s… not exactly better, and it doesn’t tell me what, if anything, they’ve learned from the experience. If you want to see what I think they should’ve done, then, well… Check the article.

Until Next Time.

reblogged

My New Article at American Scientist

As of this week, I have a new article in the July-August 2023 Special Issue of American Scientist Magazine. It’s called “Bias Optimizers,” and it’s all about the problems and potential remedies of and for GPT-type tools and other “A.I.”

This article picks up and expands on thoughts started in “The ‘P’ Stands for Pre-Trained” and in a few threads on the socials, as well as touching on some of my comments quoted here, about the use of chatbots and “A.I.” in medicine.

I’m particularly proud of the two intro grafs:

Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you’ll get more of it if you live in a facility than if you receive care at home.
At least, that is what I would believe if I accepted the sexist, racist, and misleading ableist pronouncements from today’s new artificial intelligence systems. It has been less than a year since OpenAI released ChatGPT, and mere months since its GPT-4 update and Google’s release of a competing AI chatbot, Bard. The creators of these systems promise they will make our lives easier, removing drudge work such as writing emails, filling out forms, and even writing code. But the bias programmed into these systems threatens to spread more prejudice into the world. AI-facilitated biases can affect who gets hired for what jobs, who gets believed as an expert in their field, and who is more likely to be targeted and prosecuted by police.

As you probably well know, I’ve been thinking about the ethical, epistemological, and social implications of GPT-type tools and “A.I.” in general for quite a while now, and I’m so grateful to the team at American Scientist for the opportunity to discuss all of those things with such a broad and frankly crucial audience.

I hope you enjoy it.

reblogged

The "P" Stands for Pre-trained

I know I’ve said this before, but since we’re going to be hearing increasingly more about Elon Musk and his “Anti-Woke” “A.I.” “Truth GPT” in the coming days and weeks, let’s go ahead and get some things out on the table:

All technology is political. All created artifacts are rife with values.

I keep trying to tell you that the political right understands this when it suits them— when they can weaponize it; and they’re very VERY good at weaponizing it— but people seem to keep not getting it. So let me say it again, in a somewhat different way:

There is no ground of pure objectivity. There is no god’s-eye view.

There is no purely objective thing. Pretending there is only serves to create the conditions in which the worst people can play “gotcha” anytime they can clearly point to their enemies doing what we are literally all doing ALL THE TIME: Creating meaning and knowledge out of what we value, together.

reblogged

Further Thoughts on the "Blueprint for the AI Bill of Rights"

So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the “Blueprint for the AI Bill of Rights” (henceforth “BfaAIBoR” or “blueprint”) a few weeks after that, I am both very interested and also pretty worried to see what direction research into “artificial intelligence” is actually going to take from here.

To be clear, my fundamental problem with the “Blueprint for an AI Bill of Rights” is that while it pays pretty fine lip-service to the ideas of community-led oversight, transparency, and the abolition of (and abstaining from developing) certain tools, it begins with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just… ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2017, and considering what I know for a fact got funded within that window? Yeah.

To put a finer point on it, 14 out of 16 uses of the phrase “law enforcement” and 10 out of 11 uses of “national security” in this blueprint are in direct reference to why those entities’ or concept structures’ needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn’t mention the depredations of extant military “AI” at all. Instead, it points to the idea that the Department Of Defense (DoD) “has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities.” And so with all of that being the case, there are several current “AI” projects in the pipe which a blueprint like this wouldn’t cover, even if it ever became policy, and frankly that just fundamentally undercuts Much of the real good a project like this could do.

For instance, at present, the DoD’s ethical frames are entirely about transparency, explainability, and some lip-service around equitability and “deliberate steps to minimize unintended bias in AI …” To understand a bit more of what I mean by this, here’s the DoD’s “Responsible Artificial Intelligence Strategy…” pdf (which is not natively searchable and I had to OCR myself, so heads-up); and here’s the Office of the Director of National Intelligence’s “ethical principles” for building AI. Note that not once do they consider the moral status of the biases and values they have intentionally baked into their systems.


Just seeing this 2019 video for the first time and holy shit

We tested 15 thousand common words and phrases against Youtube's bots, one by one, and determined which of those words will cause a video to be demonetized when used in the title. If we took a demonetized video and changed the words "gay" or "lesbian" to "happy" or "friend," every single time the status of the video changed to advertiser-friendly.

Demonetized terms include blacks, environmental, ethical, Ethiopia, female, gender, gay, ghetto, healing, health, hemp, HIV, homosexual, Israel, Lesbians, LGBT, mother, Muslims, Palestine, racism, spokesman, sympathy, transition, & victims.


my friend posted this on twitter & nobody has ever been more correct

I have never seen this post before but I was literally thinking almost those exact words in the last week of my dissertation writing and defense, this past month. Like literally the exact sentiment. Goddam.
