We believe that the first four of these dimensions are the foundational building blocks of trustworthiness: They aren't sufficient conditions but required ones. Without a strong commitment to security, transparency, data protection and stability (in the sense of designing for robustness and longevity), a connected device can never be trusted.
The fifth dimension, openness, plays a special role: In our view, openness is not a required condition, but it is a strong indicator of trustworthiness. Concretely, when evaluating incoming applications we look for openness; if the device is largely open, we review the rest of the application with a presumption of trustworthiness rather than a presumption of non-trustworthiness.
Let me explain.
Verification is stronger than trust. If a device is open source, researchers and the community have tools and mechanisms to verify most of the device maker's claims. But in practice, many device makers aren't able to open source their devices. (There are many industry-related reasons for this, most notably that investors still vastly prefer protectable IP; we don't like that philosophically, but it's a reality we decided to work with, and work around.)
We recognize that open sourcing isn't an option for everyone, so we decided that openness is not a required condition for qualifying for the trustmark. However, where openness isn't a given, applicants need to explain their choices and their strategies for ensuring trustworthiness.
So does a device have to be open? No. If it's not, we ask the manufacturer to provide more in-depth explanations instead, so our evaluators get the full picture.
How do we evaluate?
We plausibility-check those answers: Do the linked documents exist, and are they what the applicant claims? Are the answers consistent, or mutually exclusive? And most importantly, does the substance of the answers form a coherent narrative that's in line with our requirements? The last point is where the expertise of our expert reviewers comes into play: To an expert, a baloney answer stands out right away and raises a flag. It all needs to add up to a consistent picture of best practices and trustworthiness.
Wherever there are inconsistencies or gaps, we follow up for clarification. The response, and the way the follow-up is handled, gives us another qualitative data point to take into account: Is the company responsive? Are they cooperative or hostile? Do they demonstrate good will?
Not a perfect picture, but a pretty detailed one
Taken together, this will never give us 100% certainty that all answers are true, and will stay true. However, it gives us enough data points and input that a pretty detailed picture emerges. If we ever learn about (or suspect) non-compliance or foul play, we'll follow up, and we reserve the right to revoke the certification. It's a high-touch approach, and we're confident it will lead to high quality and consistency.
We expect this system to grow more robust over time as we gather more insights, and we'll keep adjusting and evolving it accordingly. We'll also build a repository of best practices along the way, so we can point new applicants to existing resources. In the end, we want this effort to shape the industry toward more trustworthiness. Education and open communication channels both have an important part to play.