
vanTumblr

@vaniver / vaniver.tumblr.com

Matthew Graves in meatspace, Vaniver most other places
reblogged
raginrayguns
Everyone knows that food is ultimately produced by plants, though we may get it at second or third hand if we eat animals or their products. But the average plant turns most of its sugar not into starch which is digestible, but into cellulose which is not, but forms its woody skeleton. The hoofed animals have dealt with this problem in their own way, by turning their bellies into vast hives of bacteria that attack cellulose, and on whose by-products they live. We have got to do the same, but outside our bodies. It may be done on chemical lines. Irvine has obtained a 95% yield of sugar from cellulose, but at a prohibitive cost. Or we may use micro-organisms, but in any case within the next century sugar and starch will be about as cheap as sawdust.

I still don't really get why this hasn't happened. It wouldn't exactly be sugar as we're used to it, it would be glucose rather than sucrose, but that's still sweet.

vaniver

(At time of writing, sugar is about $620 a ton and sawdust is about $50 a ton.) Also how hard is it to make a plant that's just, like, constantly leaking syrup?

easy, it's called a maple tree

@vaniver said:

They only leak syrup during the winter, right? I want a summertime maple (where the hope is that you can grow it places where it will be warm enough to net produce sugar year-round)

@vaniver said:

> Maple syrup is a syrup made from the sap of maple trees. In cold climates, these trees store starch in their trunks and roots before winter; the starch is then converted to sugar that rises in the sap in late winter and early spring. Maple trees are tapped by drilling holes into their trunks and collecting the sap, which is processed by heating to evaporate much of the water, leaving the concentrated syrup.
algorizmi

During harvest season maple sap is ~3% sucrose. An even more concentrated source is nectar from flowers, which across many species is closer to 50% and may be as high as 80% sugar. https://onlinelibrary.wiley.com/doi/full/10.1002/ps.4321

I'm not aware of any serious effort to breed plants specifically for honey production. Considering the progress in corn yield, I'd expect there's much room for improvement in nectar volume and bloom duration.

Agastache foeniculum, a kind of mint, has a reported yield[1] of over 2,700 pounds of honey per acre. (similar energy content to ~50 bushels of corn)

[1]page 225 from https://cdn.permaculturenews.org/files/bee_friendly_planting_guide.pdf

Husband points out that cane sugar and sugar beets are a very high percentage of sucrose/glucose, and the original quote is about the _average_ plant but we don't care very much about that. I think an acre of sugar beets gets you about 20 tons at 15% sugar -> about 6,000 pounds of sugar?
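A quick sanity check of the per-acre figures in this thread (assuming a 2,000 lb short ton; the honey-sugar fraction of roughly 80% is my assumption, not from the post):

```python
# Sugar beets: ~20 tons/acre at ~15% sugar, per the estimate above.
beet_tons_per_acre = 20
beet_sugar_fraction = 0.15
beet_sugar_lbs = beet_tons_per_acre * 2000 * beet_sugar_fraction
print(beet_sugar_lbs)  # 6000.0 lb of sugar per acre, matching the ~6,000 lb figure

# Agastache foeniculum: ~2,700 lb of honey per acre, per the cited guide.
# Honey is assumed here to be ~80% sugar by weight.
honey_lbs_per_acre = 2700
honey_sugar_lbs = honey_lbs_per_acre * 0.80
print(honey_sugar_lbs)  # 2160.0 lb of sugar-equivalent per acre
```

So on these rough numbers the beet field still delivers a few times more sugar per acre than the best reported honey yield.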


bringing napoleon to the modern world and showing him all the powerful states and liberal republics like that doctor who van gogh scene

reblogged
raginrayguns

two chapters into greer's "Inside a Magical Lodge" (thnx @fruityyamenrunner for the rec). A lot of it so far is just about the practicalities of administering any small group where everyone has day jobs and meets infrequently. Sometimes I was reminded of a Dungeons and Dragons group, sometimes I thought LessWrong meetups could use some of these rules and procedures, to deal with various kinds of annoying person and situation. Although LessWrong is like, pseudoacademic in some ways, and a better source of meetup structure might be academic conferences or lab meetings. Aside from that though, a lot of it seems strange and foreign and I'm struggling to understand it as anything but words on a page. Trying to take some vocabulary and relate it to stuff I'm familiar with.

  • The recognition sign. This is "I like your shoelaces," "thanks, I stole them from the president." Doesn't seem to make any sense to me in modern times. Yes, I meet certain kinds of "insiders" when I travel (tumblr users, lesswrongers, academics), but this is because I talk to them online and arrange these meetings in advance.
  • Those interested in magical work vs posers interested in make believe and dressup. This was confusing to me, because I thought all magical rituals were make believe and dress up. But I realize it's analogous to something I understand, which is discussion. Like, there's certain people in a conversation just for the excitement of being in a conversation like that, getting to use some of their technical knowledge. I think some of these people are trying to enter a community and should be welcomed, and I have not always been kind enough to them. But there is a reason for that, which is I'm trying to get something out of the conversation, something intangible and hard to explain. You could call it "insight". This is true with LessWrong stuff and academic stuff. In both cases there is potential for some "tangible" result if some insight from a conversation leads to a paper or a post, but mostly that is not the case and there's still a difference between a "real" and "fake" conversation. Steven Pinker's description of Jeffrey Epstein is a good example: "He likes schmoozing with smart and intellectual people, but he couldn’t really or had very little interest in exploring an issue. He’d wisecrack, change subjects, or get bored after a few seconds. He’s a kibitzer more than a serious intellectual." So while LessWrong meetup groups are centered around discussion rather than ritual, the same problem the magicians have, of an intangible goal that not everyone there is actually interested in, emerges.
  • The mundane vs the magical. Ayn Rand wants to make industrial society seem exciting, Greer takes for granted that it isn't and people need some escape. So there's "two worlds", the mundane and the magical, and the lodge is supposed to have the feeling of being between worlds. Greer explains a lot about lodges according to this requirement. This is why the meetings are indoors (LessWrong meetups, on the other hand, are often outdoors, like in public parks). This is why they have opening and closing rituals. Interestingly, the secrecy contributes to this. I've wondered about the consequences of MIRI's imitation of the secrecy of nuclear research, but I've never heard any emotional effect like this mentioned.
  • Lineage, Grand Lodges, charters, secret chiefs. This got me thinking of a silly thought experiment: what if Ziz started a LessWrong meetup group? Presumably she'd be condemned, there'd be some warning from more "central" rationalists not to attend. So there's nothing official, but I'm pretty sure she's implicitly excommunicated. I, on the other hand, could start one, as long as I don't conflict with one already existing in the city. But there's nothing official like a charter from a grand lodge. Or is there? If I posted the announcement on LessWrong people could see my karma and past posts; this is effectively central recordkeeping, just efficiently automated. Does that make Lightcone the Grand Lodge? But the New York LessWrong meetup group is so old it's an "Overcoming Bias" meetup group, and I don't think their legitimacy derives from the California people. So it's all informal but I feel like I see the structure of social relations that the lodges formalized.
reblogged
algorizmi

Thank you Vasili Arkhipov, who 60 years ago today averted nuclear war by disagreeing with 2 other people and stopping the launch of nuclear weapons from their submarine during the Cuban Missile Crisis.

May we all have such clear decision making through uncertainty and the strength to stand up to peer pressure.

reblogged
raginrayguns

I feel like I can't explain the significance Yudkowsky had to me to young people because you grew up with bookstores stocking "Thinking, Fast and Slow" and "Superforecasting". When I was your age there was the Sequences and that was it. Aside from reading the more technical original sources but how would you even know to do that if nobody told you about it in a readable way? This whole "judgments are made in some way and we can treat the judgment-making process from a scientific or engineering perspective and measure how well it does" perspective had a wave of popularization, which I think Nate Silver was also a part of, but this was primarily in the 2010s. Any of us who got in on it early, which is me, @youzicha, @nostalgebraist, @vaniver, etc, you know us old folk, we heard about it first from Yudkowsky. Not necessarily all Yudkowsky fans but I think even @nostalgebraist read stuff in Yudkowsky, read Tetlock later, and was like, huh, it's a reasonable version of some of the stuff Yudkowsky fans are on about, so he heard it from Yudkowsky first (even though the reason it was already familiar was that a bunch of Sequences posts are summaries of Tetlock). It changed my life by motivating me to study statistics, which everyone thinks is dry, but if you think of it as the scientific study of processes by which guesses are made, well I think it's very exciting.

vaniver

Hmm I don't think I heard “it” first from Yudkowsky; I think I heard it first from a lot of the technical original sources. I read Epistemology and the Psychology of Human Judgment back in 2009 because it was linked on the xkcd forums, I took decision analysis classes in graduate school around 2011, my master's degree in 2012 was basically statistics plus industrial rationality, and around this point I found LW thru the HPMOR author notes (and HPMOR of course was linked on the xkcd forums).

I did have much more in common with the rationalists than the OR folks; I never really tried to get any of my classmates/labmates into HPMOR or LW, mostly because I didn’t expect it to succeed. (I did once happen to introduce a former labmate to Michael Vassar at a SENS life extension conference, now that I think about it, but that didn’t go anywhere.) I wasn’t aware of any of the OR folks that were interested in the breadth of technical topics that I was or the psychology angle (I do think I got introduced to Kahneman, Gendlin, and Korzybski specifically from LW; like definitely being around LW deepened my thinking and polished my cognitive style for a lot of this stuff).

reblogged

It's interesting to me how much people struggle to intuit differences of scale. Like, years of geology training thinking about very large subjects, and I'm only barely managing it around the edges.

The classic one is, of course, the mantle- everybody has this image of the mantle as a sort of molten magma lake that the Earth's crust is floating on. Which is a pedagogically useful thing! Because the intuitions about how liquids work- forming internal currents, hot sections rising, cool sections sinking, all that- are all dynamics native to the Earth's mantle. We mostly talk about the mantle in the context of those currents, and how they drive things like continental drift, and so we tend to have this metaphor in mind of the mantle as a big magma lake.

The catch, of course, is that the mantle is a solid, not magma. It's just that at very large scales, the distinction between solids and liquids is... squirrely.

When cornered on this, a geologist will tell you that the mantle is 'ductile'. But that's a lie of omission. Because it's not that the mantle is a metal like gold or iron, what we usually think of when we talk about ductility. You couldn't hammer mantle-matter into horseshoes or nails on an anvil. It's just a rock, really. Peridotite. Chemically it's got a lot of metal atoms in it, which helps, but if you whack a chunk of it with a hammer you can expect about the same thing to happen as if you whacked a chunk of concrete. Really, it's just that any and every rock is made of tons and tons of microcrystal structures all bound together, and the boundaries between these microcrystals can shift under enormous pressure on very slow timescales; when the scope of your question gets big enough, those bonds become weak in a relative sense, and a rock starts to become more like a pile of gravel where the pebbles can shift and flow around one another.

The blunt fact is, on very large scales of space and of time, almost everything other than perfect crystals starts to act kind of like a liquid- and a lot of those do as well. When I made a study of very old Martian craters, I got used to 'eyeballing' the age based on how much the crater had subsided, almost exactly like the ways that ripples in the surface of water gradually subside over time when you throw a rock into a lake. Just, you know. Slower.

But at the same time, these things are more fragile than you'd believe, and can shatter like glass. The surface of the Earth is like this, too. Absent the kind of overpressures that make the mantle flow like it does, Earth's crust is still tremendously weak relative to many of the planet-scale forces to which it is subject- I was surprised, once, when a professor offhandedly described the crust as having a tensile strength of 'basically zero;' they really thought of the surface as a delicate filigreed bubble of glass that formed like a thin shell, almost too thin to mention, on the outside of a water droplet. On human scales, liquid is the thing that flows, and solid is the thing that breaks. But once stuff gets big or slow or both, the distinction between a solid and a liquid is more that a liquid is the thing that doesn't shatter when it flows. And it all gets really, really vague, which I suppose you'd expect when you get this far outside the contexts in which our languages were crafted.

alkatyn

"on a sufficiently large scale everything is a liquid" needs to be inscribed on something

vaniver

Thales Was Right

reblogged
bizabizow

Scalpers: There is a low supply and high demand for PS5s right now. I will buy up a bunch of them at market price just to resell at my new inflated price for personal profit

Average Person: Dude, you’re scum. This should be illegal

Scalpers: There is a low supply and high need for affordable housing right now. I will buy up a bunch of houses at their affordable price just to charge people my new inflated monthly price to live in them, without actually owning them. For personal profit.

Average Person: You may not like it, but this is a valid business practice and a necessary part of adult life. This is our free market at work, and-

nuclear take: it's the first "Average Person" that's in the wrong

I don't think the problem would be solved by making it illegal, but it's clearly scummy and to no one's benefit but the scalper's.

The alternatives to scalping are rationing, queues, lotteries, and corruption, or some combination of the preceding. You cannot fix a supply issue without creating more supply.

tanadrin

i want to elaborate on something without substantively disagreeing, it’s just a point about rhetorical framing that’s gonna bug me unless i make it

that is to say, the moral intuition around prices--that if it weren’t for people willing to defect against society at large in order to enrich themselves, price rises in the face of scarcity would be less severe--is also not factually wrong. it’s an intuition geared for small communities, like a lot of our moral intuitions, and so for the purposes of framing policy in large and complex economies it is insufficient on its own. but it is a reasonable intuition.

the economics perspective is to say, because this intuition is insufficient, it is not only a poor basis for framing policy (correct) but is fundamentally unreasonable and should be disregarded completely. i think this is wrong! economics often claims to be creating a purely descriptive model, sufficient for framing policy, but smuggles in prescriptive judgements, like “efficiency (for a definition of the term that would be strange to anyone who isn’t an economist) is inherently morally good,” which are often alien to or outright opposed to ordinary moral intuitions around fairness

and by not carefully distinguishing between the two, people begin to (again, not incorrectly IMO) suspect the descriptive elements of economics are being used to, or are actually being warped contrary to evidence to, advance a specific ideological agenda, typically a laissez-faire pro-business agenda. and some economists are in fact doing this, because their bailey is “economics is just a descriptive science” and their motte is “laissez-faire capitalism is a terminal, not instrumental good, which requires no substantive justification”

now, because in a complex modern economy with billions of people, we cannot avoid there being purely profit-seeking individuals who will do anything to get ahead, any system for distributing scarce goods is going to have to reckon with the fact that price controls on their own are usually ineffective (the “economics 101″ argument that price controls are always a bad idea is simply wrong, but they do have clear downsides policymakers need to be aware of). but that doesn’t mean individuals are wrong when they notice scalpers are 1) buying up a scarce good, that 2) they don’t intend to use themselves, and 3) making it much more expensive than it otherwise would be. that their reaction is resentment is perfectly reasonable! pricing scarce goods at a low value doesn’t make them more abundant, but it does make them cheaper, which matters to the people who are able to buy them in the end, and raising the price through speculation might be more “efficient” (in the sense you will get a more accurate signal on the value of ps5s or taylor swift tickets or w/e), but people who can’t afford them at the higher price, who might have lucked into a chance to buy them at the lower price, are not being fundamentally irrational to be annoyed. it’s rational self-interest! a small chance to buy a desired good is more appealing than no chance to buy a desired good because it’s priced out of your reach

and this is the point that’s always bugged me about this kind of economics 101 framing of price and distribution of scarce goods--your economics 101 guy will call the market with the scalpers in it more “efficient” in some sense. but efficient for whom? it facilitates access to the good for people with more money, to whom the marginal value of a dollar is smaller, because it prices all the people who couldn’t afford the good out of the market to acquire the good. but it’s no better at getting the good to people who want the good. it just excludes a large potential segment of the customer base. this market is more profitable for the seller, if the price of the good is variable and the seller can update their prices when they notice the existence of scalpers, and it’s better for the seller if the thing they care about is maximizing their monetary profit. it’s a more efficient system for society at large (in terms of the most number of people having their preferences satisfied) if and only if increased profit for the seller allows them to increase supply--so, a good system for game consoles, which you can build more of barring supply chain issues, but not for concert tickets, which are limited by the size of the venue you booked in advance.

moreover, the scalpers and speculators in this scenario are doing something that looks suspiciously like rent extraction, getting quite a lot of money for very little work, without actually adding value at all. we already established they’re not functionally increasing access to the good--they are in fact limiting access! in the game console scenario, the console manufacturer might eventually raise their prices, meaning the scalper can sell their inventory for less than the retail price but more than they bought it for, but if it takes that long for the scalper to move their inventory, then they probably got greedy and set their price too high to begin with when the console was cheap--and as the manufacturer ramps up production, consumers can expect the price to fall further, so the speculator may have actually just screwed themselves

the moral intuition against scalpers is very similar to the moral intuition against rent-seekers (which economists tend to share!), and i think it fulfills a similar social function. so i don’t think people should be too hard on it, even if they see things from a more economics-based perspective.

@tanadrin Isn't it obviously better at getting the good to the people who want the good, because they can pay more for it? There may be some distortion because different people have different amounts of money, but allocating the good to whoever happened to look at a website at a given time is also very distorting; a priori it doesn't seem like that would do any better job.

this is the standard econ 101 analysis, and i think it’s insufficient. i think ability to pay is a bad proxy for “wants it more,” because the marginal value of a dollar is wildly different at different income/wealth levels

this analysis would, IMO, make more sense in a society with a very even income distribution. but as it stands, you can effectively lock large numbers of people out of a market entirely by setting your price high enough, no matter how badly they might want the good in question--healthcare in the US is actually a pretty good example of this phenomenon at its extremes, in that it’s something that most people find extremely important (nobody wants to be sick; nobody wants to die!), but people without insurance frequently can’t afford it at any price.

the spherical cows conception of price and markets assumes there isn’t much difference in the ability of different people to pay, only willingness, because it handwaves away these differences in wealth distribution, but this seems like one of those areas where the simplifying assumption is actually smuggling in quite a lot of ideological baggage. wealth isn’t distributed very evenly, and never has been! and while you might want to use willingness to pay as a proxy for wants it more--because price is transparent in a way that inmost desires are not--it’s rather like the old joke about the drunk looking for his keys on the street, even though he lost them in the alley, because “that’s where the light is.”

One thing worth mentioning:

The PS5 scalper isn't actually providing any value to society. They're not making the product. They're not the retailers that are buying at wholesale and selling at MSRP.

In fact: by inserting themselves into the process, they are adding friction and mistrust into a market that's already experiencing scarcity.

So while there's a conversation to be had about price discrimination and efficient distribution... Scalpers aren't actually a good model for that.

yeah, and people treat landlording like it's scalping, which is wrong. the ps5 is sitting there on the truck, ready to be shipped off to a Gamer in need, and the scalper comes in and says "how about I shove my dick in this and make you pay me for what you were about to buy anyway." without the scalper, someone who isn't the scalper gets a ps5.

that is not the case with housing (apartments anyway). The housing is expensive to build and won't be built without the promise of making money back for it, either as the landlord or by selling the rights to someone who will be the landlord and expect it to eventually pay off. the apartments are not lying around only to be snapped up by evil landlords getting in the way -- landlords are in fact the customers for housing blocks and why they get made. they have to do the work of maintaining and coordinating in the building as well as all the building code compliance shit. some of them are shitty and neglect these duties but some people at every kind of job are shitty. if you think that your building would be fine without a landlord, fine, buy it! turn it into a co-op, people do that, it won't solve nearly as many problems as you think (everything costs SO much more than you think it does) but it might solve some. that doesn't mean the landlord was stealing from you; it means you could make a mutually beneficial trade.

the ultimate thing about scalpers, actually scalpers, is that they are capturing value by capitalizing on demand that the creator should be capitalizing on. If scalpers are selling PS5s for a thousand dollars, PS5s are worth a thousand dollars. If Sony was getting that money, they could be spending it on more PS5 production, so they can sell more PS5s but at less than a thousand dollars. If scalpers are selling tickets to Gorm Goombus's live concert for ten times the sticker price, the tickets are worth ten times as much as what Hololive is getting, and Hololive and Goombus should be getting that money and next time they'll be able to book a bigger venue. Scalpers exist when demand outstrips supply but the supplier won't actually capitalize on the demand and thus get the ability to alleviate the supply issue. That's what scalping is, but it's nonsensical to apply it to landlords (except the ones who are paying rent and subletting but basically nobody's complaints are restricted to those.)

vaniver
> The PS5 scalper isn't actually providing any value to society.

They’re moving a PS5 from someone who wants it less to someone who wants it more, which is providing value.

In response to brazenautomaton:

I agree that the scalper is taking gains-from-trade that the creator is leaving on the ground (and the creator should do it instead). 

It’s less obvious that the landlord is innocent in a world where lots of housing construction trades are blocked (or taxed) by regulations, as the landlord who owns housing supply benefits from the restriction of housing supply and might thus advocate for that restriction. (It’s complicated because every landlord wants to be able to build themselves while their competitors don’t build, and not everyone votes in favor of their class interest, and all that; it’s easy to have a city where the landlords are pro-building and the non-landlord owners are anti-building and the renters focus on how the landlords are bleeding them, and not on the underlying causes of the situation.) 

reblogged
raginrayguns

why did EY stop talking about self improving AI? It used to be his whole thing. Not just in his pre-"friendly AI" days when he was getting funding to write a "seed AI", but in his AI safety days when it seemed like he was basically trying to extend formal verification of programs to self-modifying programs. But I haven't heard him mention it in years? Like he didn't talk about it in the "list of lethalities" did he? Although I read that a long time ago, now. You'd think he'd be saying something there like "if it's a little superhuman it will get very superhuman" as an item, I mean he argued for that extensively in "Intelligence Explosion Microeconomics". What happened to this whole side of the EY thesis?

I suspect it was a "don't want to give the deep learning people any ideas" thing. The whole "infohazard" side of the MIRI people is so silly and toxic. But idk whether this was an instance. And maybe I'm just overlooking recent mentions.

vaniver

I think it’s mostly because the revealed capabilities of AI systems were impressive enough that it no longer seemed like a necessary part of the argument.

In ~2000, it didn’t look like you would be able to build Skynet (or even a Terminator) without major advances in software engineering, and so positing a computer system that makes advances in software engineering and then builds Skynet looked possible. In ~2016, it looked like you might be able to build a relatively small number of modules that would fit together to make Skynet, at which point you no longer need to posit recursive self-improvement; you could imagine Cyberdyne making it.

reblogged
jadagul

I really feel like Nate Silver should be more of a rationalist icon than he is.

He's an outspoken Bayesian who believes in probabilistic reasoning and does explicit calibration tests of himself. He's incredibly good at this, producing consistently well-calibrated forecasts across multiple domains. (And he's winning!)

He even exemplifies the actual primary rationalist virtue, which is pissing people off by being contrarian on the internet.

I suspect he loses points for forecasting with actual math, rather than putting a number on his intuition and muttering something about priors. Doing math in your Bayesian updates is cheating!

vaniver

Ok that was my joke answer; my serious answer is that his primary focuses are politics and sports. I think lots of rationalists view his models of politics as a sensible baseline but don’t think about politics much, and think about sports even less.

reblogged
vaniver

Dwarf Fortress

I’m playing it again after many years, and noticing some of the new features.

But this bit of a dwarf’s personality spoke to me:

He is moved by art and natural beauty, but he is troubled by this since he dislikes the natural world.

Dwarf Fortress is now available on Steam! It’s got a graphical tileset and a mouse UI, so it’s somewhat more accessible than it used to be.

reblogged

The ZA/UM debacle is so so heart wrenching man it's tragic, it's poetic, it's literally more of a thematic sequel to Disco Elysium than anything the remaining shell of the company will ever be able to churn out.

So so so fucking bleak.

I am not joking or exaggerating when I say whoever is responsible for the disbandment needs to get Mussolini'd.

skyberia

[image description: Four screenshots of black text on a red background, transcribed below.

In child psychology, a paracosm is a mental construct developed by (often lonely) children and early teenagers. It is a fantasy world secluded from ours, featuring new words for common and novel phenomenon, intricate taxonomies of nations, animals etc. Emily Brontë had one. Henry Darger had one. Children tend to forget their paracosms as the Real World imposes its terms (around 13-15).

That did not happen to Elysium. Elysium was always going to be massive. Large enough to blot out our entire reality. Messianic. Transatlantic.

Elysium: the Crown of the World.

Elysium: the Real World is an embarrassing fantasy construct and Elysium is real.

Hence, Elysium survived contact with the Real World through competition. It had its genesis during the turn of the century as a high fantasy setting. With - I would say - "some interesting ideas." Back then we were over the moon about it. We wrote incessantly. Mostly spells, hundreds if not thousands of them, each exactly one page. We visited Elysium via pen-and-paper role-playing, using a proprietary system that later became Disco Elysium's Metric. "We" were a group of 5-10 highschool dropouts called The Overcoats (it was terribly cold outside and we wore thick coats), anarchists of some sort, with the motto: "Today we drink tea; tomorrow we rule the world." Unironically, we intended Elysium to be the vessel of this conquest.

After all - it was all we had. Truancy means vagrancy, unemployment, an assortment of mental illnesses. Seeing your friends go off to University to become "real people" and have things like a PC to play Baldur's Gate 2 on. The need for a paracosm did not dissipate as the Aughts rolled on - it intensified. With nowhere to go and -22 centigrade temperatures outside, we knew we had to become "artist-people" of some sort to survive. Yet it was hard to write anything in this "fallen world" as early Christians put it. The names themselves seemed compromised, a catwalk parody: London, Milan, Paris. A shudder of loathing still overtakes me as I write them. Revachol, Mirova, La Scala del Mesque - now that I could write. An implacable air hung over the states and cities. The cold light of the mind. Grand. A quality we've come to call elytical. Even basic terms for everyday machinery needed to be changed to preserve this intangible quality. Motor carriage. Graffito. Sprechgesang. (What they call rapping.)

However, the version of Elysium we had then was not that. It was "Revachol, something-something, name missing, something lame." After a year or two of spell-writing the result was deemed "weak." Naive (which it was). We couldn't bin it, however - it was too big to fail. So we started replacing things: names, concepts, characters. Everything smaller and less credible than reality had to go. Circa 2002, we invented the pale. By 2005, we'd discarded medievalism, the pseudo-renaissance, and the industrial revolution, replacing it with modernity: plastic telephones, cops, communism, the international currency. (The spells, too, had to go. A term you have yet to encounter - extraphysics - pushed them out. Magic, we realized, needed to remain a complete unknown.)

The world around us was getting larger and darker. To keep up, Elysium needed to be even larger and more terrifying. Moreover, the world that ends all worlds ought also be more beautiful than reality. More extreme. We were anarchists, after all - growing into hardboiled Marxist-Leninists on empty stomachs. The alternative need not only to outgrow, but also to outclass the Real World and its satanic complexes. It quickly became apparent that in order to go "further than Pärnu" (Pärnu is a tiny beach town 100 kilometres from Tallinn) we needed to outdo History.

So far we've only managed to show you a tiny, insignificant corner of it: the district of Martinaise in Revachol West, on Insulinde. I can not begin to tell you how introductory it is. ("Disco Elysium" means "I learn Elysium"). It's small. A matchbox world. It's all we had money for. Yet because of You - you angel, you legend, our comrade in arms - because of your interest in our idea, we get to see more of it. Jamrock, I hope. And then to other isolas. Thank you. We hope you enjoy the Final Cut.

Robert Kurvitz, lead designer / lead writer
Brighton, England
December 2020

End ID.]

Avatar
vaniver

I was looking forward to a Disco Elysium sequel if ever they made one, but... how are we sure this isn’t just “turns out Marxist-Leninists are bad at not splintering, especially if they get a bunch of money”?

Avatar
reblogged
Avatar
raginrayguns

trade secrets make sense to me as legally protected property, patents theoretically but maybe not the real life patent system. Copyright im not sure; it has the same kind of appeal as trade secrets in that it’s an original creation but it’s not “naturally protectable” like a trade secret

@vaniver​ said:

Note that ‘trade secrets’ don’t really have much in the way of legal protection; I actually really like the trade secret vs. patent distinction, where you only get socially enforced temporary control of something *if* you tell the public how it works.
(ofc implementation details matter and our current system isn’t great)

ive read about ppl getting in trouble over trade secrets… this guy

why do you say this if its something ppl can go to prison over

Avatar
vaniver

So it’s true that stealing trade secrets is illegal, but the thing that’s protected is the method of acquisition rather than the idea. If I patent “pasta sauce with both garam masala and five spice in it”, and you come up with the same brilliant idea a year later, I can sue you for infringement, even without proving (or there being) a causal connection between my marketing the sauce and you making the sauce.

But if the ingredients of my sauce are just a trade secret, I can only sue if you acquired them by improper means--say, a person leaked them to you in breach of confidence. If you reverse engineered my sauce on your own, that’s generally lawful and I have no claim.

A neat thing about trade secrets is that they better track “how hard was this to come up with?”, since they allow for independent rediscovery. [And if you can keep your recipe secret for a century, then you still have your trade secret a century later!]

Avatar
reblogged

(buddhist youth pastor voice) i see you're refreshing tumblr. do you know what else is a painful and unending cycle of content

Avatar
reblogged

i feel like people who are very concerned about ai superintelligence are living in a totally alien world to me. not that this is a point against them, just that like...believing superintelligent AI is, with very high probability, going to either destroy the world or rule it, within the next ~10 years, would totally change my conception of myself and my life. from this perspective i guess i can sort of abstractly commend yud on dedicating himself to trying to align AI, given his beliefs thats the reasonable choice, but "i am capable of affecting whether superintelligent AI will destroy or save the world" is simply not an idea in my hypothesis space, so i would probably just. idk, get into hard drugs i guess? im not sure how fast those become a problem. im having trouble thinking things that have a positive expected benefit over the next 10 or so years, but negative past that, and so would only be reasonable in such a scenario. im sure there are some. balloon loans? take out a bunch of balloon loans and live large?

but also, i think this reveals some of the problems with "taking ideas seriously", which is that the human mind is capable of convincing itself of essentially arbitrary ideas, so if you take your crazy idea seriously you can totally destroy your life. you can sort of fix this with probabilities, like, you shouldnt let your assessment of out-there ideas get too high a probability (like, idk, above 90%), so normie ideas still have SOME influence. as a sort of safety net. but humans are bad at really grokking probabilities

I think you’re significantly wrong about how soon Eliezer thinks AI is happening, and how likely Eliezer thinks that is. The general pitch I’ve heard is “if there’s even a 1% chance of AI within 10 years, then we need at least a few insane prophets shouting about it now, because in 5 years it might be too late to change course.”

Climate change seems like a pretty good model for understanding “we need a bunch of people waving big red flags and thinking about stuff ASAP, because it takes a long time to course correct”

Also: lots of people lived through the Cold War without going insane or resorting to hard drugs, and there was definitely at least a 1% chance of the world ending if Stanislav Petrov or Vasily Arkhipov had made different decisions. So I think humans are actually fairly well wired to handle “there is a small chance the world ends tomorrow”

hmm, do you have a source for this? i agree thats an argument he uses to convince skeptics, but i cant find him explicitly giving a probability estimate on AI timelines he believes anywhere. i FEEL like i remember him giving the 10 year frame, a couple years ago, but i might be mixing him up with someone else

@skluug said:

as of 2017 EY gave apparent 50% odds of AI apocalypse by 2030
https://www.econlib.org/archives/2017/01/my_end-of-the-w.html

ok so im not crazy, he really does rate the probability pretty highly. cool

Avatar
raginrayguns
Me: I hereby forever abjure arguing we should care about an AGI ruin scenario "even if the probability is tiny". >10% seems like a fine threshold to me.
<Unnamed>: But even if the probability is tiny, we should -
Me: AIIIEEEEE <dies>
Avatar
vaniver

There’s a separate argument that seems important to mention also, that goes like this:

A: My median estimate of when we get superintelligent AI is 2050--
B: Ok, so we can ignore this and come back to it in 20 years?
A: --but my error bars are wide enough that I put >5% probability on it happening in the next 10 years. And so most of my actions now are based around trying to do good in those >5% of worlds, since they’re the ones I have the most impact on with my actions now, while I’m also looking for cheap ways to make things good if AI comes later.

This still isn’t a Pascal’s Mugging situation--A isn’t playing any tricks where “well, if you lower the probability, I can just raise the utility to compensate”--it’s a situation where A thinks it’s possible to roll a 20 on a 20-sided die, and that should get planned around.
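A’s reasoning here is ordinary expected-impact arithmetic, not a mugging. A minimal sketch (all numbers hypothetical, chosen only to illustrate the shape of the argument, not taken from anyone’s actual estimates):

```python
# Toy expected-impact calculation for A's argument. The idea: even if
# short timelines get only ~5% probability, actions taken *now* matter
# far more in those worlds, so they can dominate the expected value.
p_short, p_long = 0.05, 0.95           # A's odds: AGI within ~10 years vs. later
impact_short, impact_long = 10.0, 0.5  # hypothetical: how much acting now helps in each branch

ev_short = p_short * impact_short  # 0.05 * 10.0 = 0.5
ev_long = p_long * impact_long     # 0.95 * 0.5  = 0.475

# The short-timelines branch wins despite its low probability, because
# that's the branch where acting early has leverage.
assert ev_short > ev_long
```

Note that nothing here involves blowing up the utility to offset a shrinking probability--both numbers are fixed, finite, and merely multiplied, which is why it isn’t a Pascal’s Mugging.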
