• 25 Posts
  • 110 Comments
Joined 1 year ago
Cake day: June 12, 2023

  • Ahh yes, Civ IV. From ye olden days, when the dev teams cared about such weird and obsolete ideas as testing the game before release, or creating an interface that tells the player what the fuck is actually happening. Or usable asynchronous multiplayer, or an AI with enough of a clue to play the damn game competently… I could go on.

    Some people apparently liked V’s whole “don’t build too many cities, we don’t want to have an actual empire here” deal, which definitely isn’t my thing but does create less micro. But most of the mechanics were reasonable and the UI conveyed more or less enough info to follow along. They also opened up the code after the final expansion so modders could do some really great things.

    VI had a lot of really good ideas, and zero polish. The current version of the game is laden with silly bugs, rife with bizarre balancing choices, and hideously opaque about simple questions like “how much research have I put into this tech”, “how much production overflowed off this completed build”, and “how likely is this unit to kill this other unit, vs simply damaging it.” They haven’t opened up the code to modders, nor have they put any effort into fixing these frankly silly errors themselves.

    Civ IV is great because of relatively simple mechanics which allow a lot of interesting choices in how to construct and develop your empire. It accentuates this by getting all the boring stuff right: bugs are few and minor, the interface is communicative, etc. It’s not perfect in either regard, and yet somehow it far exceeds its successors in these simple categories. This is how you make a good turn-based 4X game actually fun, even with 2005 graphics.

    And yet, V and VI sold extremely well, and VII seemingly will as well, despite inevitably being a grossly inferior product at release which will be dragged most of the way to a truly finished state over five years of patches and DLC.

    I guess this is very “stop having fun” meme of me, but why the hell are the only games in this genre (of all genres) trading balance, bug fixes, and comprehensible interfaces for fancy graphics? Is it really not profitable to make a game like Civ IV in 2024?


  • From the Washington Post piece:

    But the study doesn’t go so far as to say that Russia had no influence on people who voted for President Donald Trump.

    • It doesn’t examine other social media, like the much-larger Facebook.
    • Nor does it address Russian hack-and-leak operations. Another major study in 2018 by University of Pennsylvania communications professor Kathleen Hall Jamieson suggested those probably played a significant role in the 2016 race’s outcome.
    • Lastly, it doesn’t suggest that foreign influence operations aren’t a threat at all.

    And

    “Despite these consistent findings, it would be a mistake to conclude that simply because the Russian foreign influence campaign on Twitter was not meaningfully related to individual-level attitudes that other aspects of the campaign did not have any impact on the election, or on faith in American electoral integrity,” the report states.



  • First off, it’s clear that the metaphor the writers initially had in mind was a computer storing data. The TNG tech manual is just vague enough to be ambiguous on this point, but very heavily implies a “scan and save a pattern -> destroy the original -> rebuild from the pattern” process. Terminology like “pattern buffer” no doubt comes out of that conception.

    It’s also clear that by the end of 90s Trek at least some people with decision making power felt it was really important to explicitly shoot down a lot of the “kill and clone machine” theories about how transporters work, which is why Enterprise in particular is full of counter-evidence. Of course, TNG Realm of Fear was clearly not written by someone with “kill and clone” in mind, and stands as another very strong bit of evidence against that theory. The conflicting intentions make things confusing, but they are not irreconcilable.

    My preferred explanation is as follows: When they shift something into subspace, they still need to keep an accurate track of exactly where in subspace everything is (the “pattern”), in addition to preventing whatever extradimensional subspace interference whosamawhatsit from damaging the matter itself. (If you’re familiar with computer programming, the pattern is functionally a huge set of “pointers”, not pointing to a specific piece of computer memory, but to a specific point within the non-Euclidean topology of subspace.) This pattern is stored in the “pattern buffer”, a computer memory storage unit with an extremely high capacity but which only retains data for a limited time. The transporter then uses this pattern to find the dematerialized transportee in subspace and rematerialize them at the target coordinates, taking great care to ensure that all these trillions of pieces are moved to the correct locations in realspace. These steps can be (and often are) accelerated, with a person beginning to materialize at the target coordinates while still dematerializing on the transporter pad (see TNG Darmok for an example off the top of my head).

    The reason you can’t just tell the transporter to make another copy of what’s in the buffer is that although you have a lot of information about whatever you just dematerialized, you only have one copy of the matter in the buffer. If you try to materialize another one you’ll be trying to pull matter from subspace where none exists: the transporter equivalent of a Segmentation Fault, to use another computer science term. If you tried to use that pattern to convert an appropriate quantity of base matter into a copy of whatever was in the buffer, you’d still be missing any information about the transported material which can’t be gleaned exclusively from a mapping of where each piece was: you won’t necessarily know exactly what every piece was, at a precision necessary to recreate it. Especially if the diffusion of material into subspace is sufficiently predictable that the pattern doesn’t need a pointer for every individual subatomic particle, but can capture a cluster of particles with each one.
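    To make the pointer/segfault analogy concrete, here’s a minimal toy sketch (all names invented for illustration, nothing canonical): the pattern holds only addresses into subspace, and pulling matter out consumes the single copy, so replaying the same pattern fails.

    ```python
    class Subspace:
        """Toy stand-in: holds the single copy of dematerialized matter."""
        def __init__(self):
            self._slots = {}
            self._next = 0

        def store(self, chunk):
            addr = self._next
            self._next += 1
            self._slots[addr] = chunk
            return addr  # the pattern keeps only this address, not the chunk

        def retrieve(self, addr):
            # Retrieval consumes the matter: there is only ever one copy.
            if addr not in self._slots:
                raise LookupError("no matter at this address (segfault!)")
            return self._slots.pop(addr)


    def dematerialize(subspace, matter):
        # The "pattern": a list of pointers, one per chunk of matter.
        return [subspace.store(chunk) for chunk in matter]

    def rematerialize(subspace, pattern):
        return [subspace.retrieve(addr) for addr in pattern]


    subspace = Subspace()
    pattern = dematerialize(subspace, ["arm", "leg", "torso"])
    rematerialize(subspace, pattern)  # works exactly once
    # Calling rematerialize(subspace, pattern) again raises LookupError:
    # the pattern survives, but the matter it pointed at is gone.
    ```

    The pattern itself is just addresses, which is why it can persist (as a “transporter trace”) without being enough to build a second copy.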

    We know from the existence of “transporter traces” that the transport process does leave behind some persistent information about a person who was transported. We also know that it is possible for the transporter operator to identify and deactivate weapons mid-transport. It makes sense that a mapping of pointers could be extrapolated out to get a lot of data about the matter being transported (such as detailed information on a subject’s cellular makeup, or if there’s a device capable of discharging a dangerous amount of energy) while still falling far short of the data required to make an exact copy.


  • Doctor McCoy used the transporter very frequently with minimal complaining; the only complaint I can recall is from TMP and followed a horrific and unexpected transporter accident.

    As for transporters in Enterprise, two things are especially noteworthy: one, they explicitly refuted the idea that the transporter creates “some sort of weird copy” of the person or object transported, and two, those human-safe transporters were contemporary with very primitive replicator equivalents called protein resequencers. Clearly transporters aren’t building humans atom-by-atom from data alone if they can’t figure out how to do more than resequence protein molecules in any other context.

    Transporters don’t do anything to affect the matter they are transporting unless explicitly intended to: by the 24th century they are programmed to filter out recognizable pathogens, and can be used to deactivate weapons or occasionally monkey with the genes of a person in mid-transport, but things routinely pass through the transporter without issue which are either totally unknown or explicitly non-replicatable. None of this makes sense if the sequence is scan -> destroy -> rebuild, but makes total sense if the transporter is shifting the transportee into subspace (with some tweaks to allow them to exist there) and then back out of subspace at the destination.

    Thomas Riker (and now William Boimler) is the one big exception. Both duplications occurred under a very specific and extremely rare weather condition, and the first time this happened the Chief Engineer of the Federation flagship Enterprise was shocked that such a thing was even possible. I’m much more inclined to believe that the “transporter duplicates” are actually the result of the phenomenon that duplicated Voyager in Deadlock, not the transporter actually constructing two people from the pattern and matter of only one.


  • A transporter is a device which takes matter, shifts it into subspace, and can do some manipulation of that matter in the process, but can’t reconstruct it arbitrarily. Once the transported object has been rematerialized, all the transporter has left is a record of what that matter was at a far lesser precision than what would be needed to replicate it.

    A replicator is a transporter designed to shift inert matter into subspace and modify it extensively from that state. A typical replicator is less precise than a transporter and is simultaneously limited by the complexity of its recipes. It cannot produce functional living things, for example.

    Transporters and replicators are frequently referred to as matter-energy conversion devices. This is technically true but somewhat deceptive. It’s also a common misconception that a transporter is an advanced replicator, instead of the other way round, but we know this isn’t true: a safe-for-humans transporter was invented and used in the 22nd century, while the contemporary replicator equivalents were primitive protein resequencers.