There needs to be a crowd-sourced product review and maintenance website that can track trends of enshittification.
From Wikishittia, the free enshittepedia
(it does not exist, sadly)
I wonder if they’d mind someone mirroring their content, but with one difference: anyone can edit, any time, with no restrictions, spam blocking, vetting, etc.
See what chaos ensues
How dare you get my hopes up.
Let’s say everyone had to use an identity verification service to sign up, e.g. send photos of their ID and their SSN (national identity number) to be vetted by a third party.
How long after the service got popular would it take for the most aggressive marketers to pay rings of fraudsters to lend their identities and/or make fake reviews?
I think it would definitely start out great until it got big enough to be super useful, and then the fraud would ramp up. I think an organization like Consumer Reports has a chance at successfully maintaining a low-bias product database, but the paywall is a big obstacle, as is the fact that they only review the largest product categories.
These are the pitfalls with the “amazon reviews/yelp” model.
A decent implementation of the Wikipedia/FOSS model sidesteps this because it is, in theory, run by opinionated curators. No amount of bots/shills can break the article soft-lock once foul play is spotted.
That’s not to say these systems haven’t been occasionally broken through more sophisticated attacks, but empirically it seems clear that the model generally works well enough given enough community engagement (which would be the biggest challenge IMO, because maintainers can’t be expected to buy every product, and reliable primary sources may be hard to come by).
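The soft-lock idea above can be sketched in a few lines. This is a toy model loosely inspired by Wikipedia-style semi-protection, not any real implementation; the class, function, and thresholds here are all made up for illustration:

```python
# Toy sketch of a "soft-lock": articles stay open to everyone until foul play
# is flagged; once flagged, only established accounts can edit. (All names
# and thresholds below are invented, loosely modeled on semi-protection.)
from dataclasses import dataclass

@dataclass
class Editor:
    account_age_days: int
    edit_count: int

def can_edit(editor, article_flagged, min_age_days=30, min_edits=50):
    """Open editing normally; under soft-lock, require an established account."""
    if not article_flagged:
        return True
    return editor.account_age_days >= min_age_days and editor.edit_count >= min_edits

# A fresh throwaway account is blocked on a flagged article; a veteran is not.
assert can_edit(Editor(account_age_days=1, edit_count=0), article_flagged=True) is False
assert can_edit(Editor(account_age_days=400, edit_count=900), article_flagged=True) is True
assert can_edit(Editor(account_age_days=1, edit_count=0), article_flagged=False) is True
```

The point of the design is that shill rings mass-producing fresh accounts gain nothing once an article is flagged, so the cost of an attack rises sharply while honest veteran contributors are unaffected.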
I mean, it more or less follows a line. It’s getting ever steeper lately, but it’s pretty predictable.
The trick is designing the thing in such a way as to resist infiltration by astroturfing marketers.