A structured way to discuss design in open source

Hello,

I’d like to share an idea, maybe a naive one, about whether it’s possible to build a more shared and structured way of discussing UX problems in open source.

We’re a small student group from Nantes, and we’re working on a proposal for a quantitative evaluation system for user experience in open source, so that different roles can meaningfully cooperate and negotiate while studying UX in open source. This idea is still very early, and honestly we’re not even sure it’s a “real project” yet. But below is a small manifesto we wrote to explain the direction we’re thinking about.

Open Experiences Manifesto

The problem we are facing

End-user open source software is a user-centred product and should share the same fundamental design requirements as proprietary applications, yet it continues to operate under governance models inherited from open platforms, where developers remain the central decision-makers. As a result, it inherits the unintentionally centralized technical authority created by “code donating”, and the very users that open source was meant to empower remain structurally excluded from the processes that define their tools.

Why is UX still not treated as a first-class system problem to be governed?

Experience issues rarely appear as outright failures; instead they accumulate during use as cognitive effort, navigation friction, and decision burden, which are hard to surface and even harder to compare consistently.

UX contributions therefore come from different experience-based perspectives, and this lack of consistency is amplified in open-source contribution environments, increasing communication, negotiation, and decision-making costs.

Our Values and Mission

We are not satisfied with merely granting users the right to run, copy, distribute, study, and modify software. We strive to ensure that users also enjoy the freedom of using software effectively, regardless of their technical knowledge or expertise.

We call for a new collaboration among developers, designers, and users themselves, establishing a shared quantitative evaluation system for user experience so that different roles can meaningfully cooperate and negotiate.

Every step we take is aimed at reducing the cost of use and narrowing the distance between technology and the people who rely on it.

Our Aim

We propose a shared quantitative evaluation system that clarifies:

  • what problems should be addressed first,

  • what counts as a real improvement in user experience,

  • which design choices are worth implementing,

enabling UX decisions to be quantified, discussed, compared, and negotiated, forming an auditable chain of evidence that can be traced and reviewed, and translating users’ feedback into concrete, actionable directions for change.

Call to Action

We would genuinely love to hear thoughts, criticism, or reactions.

I like the manifesto and I wonder what this would look like in practice. I know that some people, including me, have been experimenting with making it easier to report issues and workflows for non-experts (e.g. by offering a way to screen-record a process that is difficult to do).

Thank you very much for moving it to another topic.
So the basic idea is a weighted scoring function, but run by a sort of open community: the criteria themselves are open to modification, and honestly there’s a lot that’s naive and inappropriate that’ll need fixing.
The idea runs in two stages. The first is at the prototype level. Part A is just counting errors against existing standards (missing loading states, contrast issues), recording their number and severity. Part B is more of a community vote: each criterion (for reference: https://urbanmobilitycourses.eu/wp-content/uploads/2021/04/10-Usability-Heuristics-for-User-Interface-Design.pdf) has written descriptions of what the different quality levels actually look like, and people vote on which one matches what they’re seeing. The weights for each criterion are set by people with relevant domain knowledge, but the voting itself is open to everyone.
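To make that concrete, here is a minimal sketch of how the stage-one score could be computed. Everything here is our own assumption for illustration (the 0–4 rubric levels, using the median to aggregate votes, the `penalty_per_point` rate), not a fixed design:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Criterion:
    name: str          # e.g. one of Nielsen's 10 heuristics
    weight: float      # set by people with relevant domain knowledge
    votes: list[int]   # open community votes: rubric level 0 (worst) to 4 (best)

@dataclass
class Defect:
    description: str   # e.g. "missing loading state on submit"
    severity: int      # 1 (cosmetic) to 4 (blocker), against an existing standard

def part_a_penalty(defects: list[Defect]) -> float:
    """Part A: counted errors, weighted by severity."""
    return float(sum(d.severity for d in defects))

def part_b_score(criteria: list[Criterion]) -> float:
    """Part B: weighted median of community rubric votes, normalised to 0..1."""
    total = sum(c.weight for c in criteria)
    return sum(c.weight * median(c.votes) / 4 for c in criteria) / total

def stage_one_score(criteria: list[Criterion], defects: list[Defect],
                    penalty_per_point: float = 0.02) -> float:
    """Combine both parts; the penalty rate is an arbitrary placeholder."""
    return max(0.0, part_b_score(criteria) - penalty_per_point * part_a_penalty(defects))

# Toy example
criteria = [
    Criterion("Visibility of system status", weight=2.0, votes=[3, 2, 3, 4]),
    Criterion("Error prevention", weight=1.0, votes=[1, 2, 2]),
]
defects = [Defect("missing loading state on submit", severity=2)]
print(stage_one_score(criteria, defects))  # ~0.63
```

The median is just one possible choice for aggregating open votes; it resists brigading a bit better than the mean, but that’s exactly the kind of detail we’d want the community to be able to change.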
The second stage is based on user testing. The logic is: if people can’t finish a task, or it takes too long, something’s off. The aim is to track drop-offs, error rates, and conversion, find where the friction is actually hurting the product, and map it back to specific points in the stage-one grid.
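Again purely illustrative: a sketch of how task-level test results could be aggregated and mapped back onto the stage-one criteria. The `TaskResult` fields, the thresholds, and the task-to-criteria mapping are all assumptions we made up for this example:

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    task: str        # e.g. "export a document as PDF"
    completed: bool  # did the participant finish the task?
    seconds: float   # time taken
    errors: int      # wrong clicks, backtracks, etc.

# Hypothetical mapping from a task to the stage-one criteria it exercises.
TASK_TO_CRITERIA = {
    "export a document as PDF": ["Visibility of system status", "Error prevention"],
}

def friction_report(results: list[TaskResult], max_seconds: float = 60.0) -> dict:
    """Flag tasks with low completion or slow times, keyed to stage-one criteria."""
    by_task: dict[str, list[TaskResult]] = {}
    for r in results:
        by_task.setdefault(r.task, []).append(r)

    report = {}
    for task, rs in by_task.items():
        completion = sum(r.completed for r in rs) / len(rs)
        avg_time = sum(r.seconds for r in rs) / len(rs)
        # Arbitrary thresholds: under 80% completion or over the time budget.
        if completion < 0.8 or avg_time > max_seconds:
            report[task] = {
                "completion": completion,
                "avg_seconds": avg_time,
                "criteria": TASK_TO_CRITERIA.get(task, []),
            }
    return report
```

The point of the mapping table is the traceability we talked about in the manifesto: a failed task points back at specific criteria in the stage-one grid, so the evidence chain stays auditable.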
So that’s basically the idea.