Apparently your tap water is dramatically colder than any house or apartment I’ve lived in.
Yes, putting ice in water does make me enjoy it more, and no, letting the tap run doesn’t do nearly as much to cool it down as ice cubes do.
It’s really depressing how any internet discussion about global warming is full of comments like this, which exist only to downplay the small but real improvements that others have made. It’s whataboutism, plain and simple, and only serves to discourage people from doing anything at all.
This guy getting a more efficient stove isn’t going to save the planet, but at least it helps. Your comment (and many others in this thread) doesn’t do anything at all about our climate problem, and mostly serves to make other people feel stupid and inadequate for even trying to do something.
There is so much, so fucking much, that needs to be done to save our planet. If you think that political change is the only thing that will “really” matter to save the planet (it’s obviously going to be a huge factor), and you are so deeply committed to the ideal that the only things worth doing are those which directly further said political change, then you have serious work to do on your messaging strategy because what you had to say here clearly isn’t causing global change.
Alternately, if you think the situation is so impossible that nothing can be done to save it, go find a different void to yell into and stop trying to drag down those of us who still have some hope.
I’m very fond of Jack Frost. It’s as corny and delightfully bizarre as one could want from a Russian mythology movie made in the USSR in 1965, and the riffs are obviously great.
Ahh yes, Civ IV. From ye olden days, when the dev teams cared about such weird and obsolete ideas as testing the game before release, or creating an interface that tells the player what the fuck is actually happening. Or usable asynchronous multiplayer, or an AI with enough of a clue to play the damn game competently… I could go on.
Some people apparently liked V’s whole “don’t build too many cities, we don’t want to have an actual empire here” deal, which definitely isn’t my thing but does create less micro. But most of the mechanics were reasonable and the UI shared more or less enough info to follow along. They also opened up the code after the final expansion so modders could do some really great things.
VI had a lot of really good ideas, and zero polish. The current version of the game is laden with silly bugs, rife with bizarre balancing choices, and hideously opaque about simple questions like “how much research have I put into this tech”, “how much production overflowed off this completed build”, and “how likely is this unit to kill this other unit, vs simply damaging it.” They haven’t opened up the code to modders, nor have they put any effort into fixing these frankly silly errors themselves.
Civ IV is great because of relatively simple mechanics which allow a lot of interesting choices in how to construct and develop your empire. It accentuates this by getting all the boring stuff right: bugs are few and minor, the interface is communicative, etc. It’s not perfect in either regard, and yet somehow it far exceeds its successors in these simple categories. This is how you make a good turn-based 4X game actually fun, even with 2005 graphics.
And yet, V and VI sold extremely well, and VII seemingly will as well, despite inevitably being a grossly inferior product at release which will be dragged most of the way to a truly finished state over five years of patches and DLC.
I guess this is very “stop having fun meme”, but why the hell are the only games in this genre (of all genres) trading balance, bug fixes, and comprehensible interfaces for fancy graphics? Is it really not profitable to make a game like Civ IV in 2024?
From the Washington Post piece:
But the study doesn’t go so far as to say that Russia had no influence on people who voted for President Donald Trump.
- It doesn’t examine other social media, like the much-larger Facebook.
- Nor does it address Russian hack-and-leak operations. Another major study in 2018 by University of Pennsylvania communications professor Kathleen Hall Jamieson suggested those probably played a significant role in the 2016 race’s outcome.
- Lastly, it doesn’t suggest that foreign influence operations aren’t a threat at all.
And
“Despite these consistent findings, it would be a mistake to conclude that simply because the Russian foreign influence campaign on Twitter was not meaningfully related to individual-level attitudes that other aspects of the campaign did not have any impact on the election, or on faith in American electoral integrity,” the report states.
This is an excellent post.
All posts and comments in Daystrom Institute must be substantive and explain their reasoning. Simply declaring that a season of the show is so bad that it shouldn’t exist is not sufficient.
If you want to point out specific discrepancies and argue that they are a reason to view S2 and S3 as contradictory, that would be appropriate here.
If you have something substantive to share about the episode, please feel free!
I feel compelled to note that being promoted from Ensign (O1) to Lieutenant Commander (O4) would be a triple promotion, skipping both Lieutenant Jr. Grade and Lieutenant.
The typical Vulcan response of passivity but curiosity is going to work perfectly throughout Lower Decks.
It’s just perfect, isn’t it? Hardly a surprise to me given that a huge chunk of the funny parts of “serious” Trek stem from the dry bluntness of characters like Spock, Data, and Odo. Now that T’lyn is here my only question is why it took until the 4th season for someone like her to show up.
There are relatively few direct references to Discovery in Lower Decks. More importantly, you’ll enjoy Lower Decks even if you don’t notice or “get” a handful of references.
Lower Decks isn’t good because it references older shows, it’s good because it’s funny and you care about the characters. There are people out there watching it and loving it with minimal or no prior Trek knowledge.
Grief is complicated, and two years is no time at all to recover from the death of a parent. It makes complete sense that watching something you associate with him would still be painful, and there’s nothing to be gained by forcing it.
Eventually you’ll reach the point where reminders of your father bring up happy feelings, with the pain of losing him still present, but not overwhelming. That won’t happen fast, but you will get there. That’s the time to give TNG another go, and see how it makes you feel.
Hang in there, friend.
I feel compelled to recommend this guide by a long-time Daystrom Institute contributor. It does an excellent job identifying episodes as essential, unnecessary but fun, mediocre, or outright bad. A good place to work from if you want a more flexible recommendation of what to try and what to skip.
Hello,
Daystrom Institute is a place for serious, in-depth discussion about Star Trek. One-liner jokes and other shallow content are not appropriate here.
First off, it’s clear that the metaphor the writers initially had in mind was a computer storing data. The TNG tech manual is just vague enough to be ambiguous on this point, but very heavily implies a “scan and save a pattern -> destroy the original -> rebuild from the pattern” process. Terminology like “pattern buffer” no doubt comes out of that conception.
It’s also clear that by the end of 90s Trek at least some people with decision making power felt it was really important to explicitly shoot down a lot of the “kill and clone machine” theories about how transporters work, which is why Enterprise in particular is full of counter-evidence. Of course, TNG Realm of Fear was clearly not written by someone with “kill and clone” in mind, and stands as another very strong bit of evidence against that theory. The conflicting intentions make things confusing, but they are not irreconcilable.
My preferred explanation is as follows: When they shift something into subspace, they still need to keep accurate track of exactly where in subspace everything is (the “pattern”), in addition to preventing whatever extradimensional subspace interference whosamawhatsit from damaging the matter itself. (If you’re familiar with computer programming, the pattern is functionally a huge set of “pointers”, not pointing to a specific piece of computer memory, but to a specific point within the non-Euclidean topology of subspace.) This pattern is stored in the “pattern buffer”, a computer memory storage unit with an extremely high capacity but which only retains data for a limited time. The transporter then uses this pattern to find the dematerialized transportee in subspace and rematerialize them at the target coordinates, taking great care to ensure that all these trillions of pieces are moved to the correct locations in realspace. These steps can be (and often are) accelerated, with a person beginning to materialize at the target coordinates while still dematerializing on the transporter pad (see TNG Darmok for an example off the top of my head).
The reason you can’t just tell the transporter to make another copy of what’s in the buffer is that although you have a lot of information about whatever you just dematerialized, you only have one copy of the matter in the buffer. If you try to materialize another one you’ll be trying to pull matter from subspace where none exists: the transporter equivalent of a segmentation fault, to use another computer science term. If you tried to use that pattern to convert an appropriate quantity of base matter into a copy of whatever was in the buffer, you’d still be missing any information about the transported material which can’t be gleaned exclusively from a mapping of where each piece was: you won’t necessarily know exactly what every piece was, at the precision necessary to recreate it. Especially if the diffusion of material into subspace is sufficiently predictable that the pattern doesn’t need a pointer for every individual subatomic particle, but can capture a cluster of particles with each one.
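If it helps, the “pointers into subspace” analogy can be sketched in a few lines of (entirely hypothetical, obviously) Python. The pattern buffer stores only *locations*, not the matter itself, so it can be cashed in exactly once; a second attempt dereferences pointers to matter that’s no longer there:

```python
# Toy sketch of the analogy above. Nothing here is canon; the names
# (Transporter, PatternBufferError, etc.) are made up for illustration.

class PatternBufferError(Exception):
    """The transporter's version of a segmentation fault."""

class Transporter:
    def __init__(self):
        self.subspace = {}   # subspace location -> actual piece of matter
        self.pattern = []    # the "pattern": ordered pointers into subspace

    def dematerialize(self, pieces):
        self.pattern = []
        for i, piece in enumerate(pieces):
            loc = ("subspace", i)        # made-up subspace coordinate
            self.subspace[loc] = piece   # the one and only copy of the matter
            self.pattern.append(loc)     # the pattern only records *where*

    def rematerialize(self):
        pieces = []
        for loc in self.pattern:
            if loc not in self.subspace:
                # Matter already pulled back out: a dangling pointer.
                raise PatternBufferError(f"nothing in subspace at {loc}")
            pieces.append(self.subspace.pop(loc))
        return pieces

t = Transporter()
t.dematerialize(["arm", "leg", "torso"])
print(t.rematerialize())   # works exactly once: ['arm', 'leg', 'torso']
try:
    t.rematerialize()      # the pattern survives, but the matter is gone
except PatternBufferError as e:
    print("duplication failed:", e)
```

The pattern outliving the matter is the whole point: it’s why transporter traces can exist afterward while a second copy can’t be materialized from the buffer alone.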
We know from the existence of “transporter traces” that the transport process does leave behind some persistent information about a person who was transported. We also know that it is possible for the transporter operator to identify and deactivate weapons mid-transport. It makes sense that a mapping of pointers could be extrapolated out to get a lot of data about the matter being transported (such as detailed information on a subject’s cellular makeup, or if there’s a device capable of discharging a dangerous amount of energy) while still falling far short of the data required to make an exact copy.
Doctor McCoy used the transporter very frequently with minimal complaining; the only complaint I can recall is from TMP and followed a horrific and unexpected transporter accident.
As for transporters in Enterprise, two things are especially noteworthy: one, they explicitly refuted the idea that the transporter creates a “some sort of weird copy” of the person or object transported, and two, those human-safe transporters were contemporary with very primitive replicator equivalents called protein resequencers. Clearly transporters aren’t building humans atom-by-atom from data alone if they can’t figure out how to do more than resequence protein molecules in any other context.
Transporters don’t do anything to affect the matter they are transporting unless explicitly intended to: by the 24th century they are programmed to filter out recognizable pathogens, and can be used to deactivate weapons or occasionally monkey with the genes of a person in mid-transport, but things routinely pass through the transporter without issue which are either totally unknown or explicitly non-replicatable. None of this makes sense if the sequence is scan -> destroy -> rebuild, but makes total sense if the transporter is shifting the transportee into subspace (with some tweaks to allow them to exist there) and then back out of subspace at the destination.
Thomas Riker (and now William Boimler) is the one big exception. Both duplications occurred under a very specific and extremely rare weather condition, and the first time it happened the Chief Engineer of the Federation flagship was shocked that such a thing was even possible. I’m much more inclined to believe that the “transporter duplicates” are actually the result of the phenomenon that duplicated Voyager in Deadlock, not the transporter actually constructing two people from the pattern and matter of only one.
A transporter is a device which takes matter, shifts it into subspace, and can do some manipulation of that matter in the process, but can’t reconstruct it arbitrarily. Once the transported object has been rematerialized, all the transporter has left is a record of what that matter was, at a far lesser precision than what would be needed to replicate it.
A replicator is a transporter designed to shift inert matter into subspace and modify it extensively from that state. A typical replicator is less precise than a transporter and is simultaneously limited by the complexity of its recipes. It cannot produce functional living things, for example.
Transporters and replicators are frequently referred to as matter-energy conversion devices. This is technically true but somewhat deceptive. It’s also a common misconception that a transporter is an advanced replicator, instead of the other way round, but we know this isn’t true: a safe-for-humans Transporter was invented and used in the 22nd century, while the contemporary replicator equivalents were primitive protein resequencers.
How did you do this?