And then the AO3 machinery doesn't even process the idea that those users are really mad about Thing X. Which doesn't help with any efforts to get Thing X changed.
Definitely seems like that's the pattern! It's complicated with AO3; I've seen the sort of mythmaking and "wtf, that just straight-up isn't true" stuff that gets spread on Tumblr and Reddit (and I imagine dealing with it as a wrangler is beyond frustrating). But at the end of the day, "users think the system works like X and they're mad about it, but it doesn't actually work that way" should always prompt a "huh, why do they think it works this way?", you know? How to structure that inquiry, and even how to spot when the misalignment is happening, is the meat and potatoes of a lot of user research in the software space.
To put a bow on it: I do appreciate your clarifications, as I said. I just also hope that things don't stop at clarification, and that TW leadership's decisions are a little more anchored to how people actually use the site going forward.