Trustable Tech Mark: Our Theory of Trust

When is it okay to trust a device? What makes a device and its manufacturers trustworthy? How do we evaluate trust for the Trustable Tech mark? Here's our theory of trust, our approach to the sometimes fuzzy concept of trust and trustworthiness.

Trust is a personal decision

First of all, it's important to keep in mind that trusting—or not trusting—is a highly personal decision. The Trustable Tech mark can only ever be one indicator that you might want to rely on, or not: Depending on the circumstances of your life, your mileage may vary.

That said, here's how we go about this.

Our trustmark aims to give the companies that go above and beyond to build trustworthy products a way to demonstrate that they do. That's already a pretty high bar to clear, given the state of the industry right now. To evaluate trustworthiness, we rely on information provided by the makers of the devices.

The building blocks of trust

We ask a series of questions to establish credibility in five dimensions: Security, Transparency, Privacy & Data Practices, Stability, and Openness.


We believe that the first four of these dimensions are the foundational building blocks of trustworthiness: They aren't sufficient conditions but required ones. Without a strong commitment to security, transparency, data protection and stability (in the sense of designing for robustness and longevity), a connected device can never be trusted.

The fifth dimension, openness, plays a special role: In our view, openness is not a required condition, but it is a strong indicator of trustworthiness. Concretely, when evaluating incoming applications we look for openness; if a device is largely open, we approach the rest of the application with an assumption of trustworthiness rather than an assumption of non-trustworthiness.

Let me explain.

Verification is stronger than trust. If a device is open source, there are tools and mechanisms in place for researchers and the community to verify most of the device maker's claims. But in practice, many device makers aren't able to open source their devices. (There are many industry-related reasons for this, most notably that investors still vastly prefer protectable IP; we don't like that philosophically, but it's a reality we decided to work with, and work around.)

So we recognize that open sourcing isn't an option for everyone, and decided that openness is not a required condition of qualifying for the trustmark. However, where openness isn't a given, applicants need to explain their choices and their strategies to ensure trustworthiness.

So does a device have to be open? No. If it's not open, we ask the manufacturer to provide more in-depth explanations instead, so our evaluators get the full picture.

How we evaluate

Now, let's look at how we evaluate concretely (or are planning to, as of today; this might still change). Every incoming application is reviewed by our pool of experts. (More on that soon.) The information we ask companies to submit ranges from very concrete to slightly more abstract, from easily provable (like a link to a privacy policy document) to what are essentially value statements (like a commitment never to pursue legal action against security researchers or tinkerers). Some answers require a clear YES; others are optional and help our evaluators put things into context.

We plausibility-check those answers: Do the linked documents exist, and are they what the applicant claims? Are the answers consistent, or do they contradict one another? And most importantly, does the substance of the answers provide a coherent narrative that's in line with our requirements? The last one is where the expertise of our reviewers comes into play: To an expert, a baloney answer will stand out right away and raise a flag. It all needs to add up to a consistent picture of best practices and trustworthiness.

Wherever there are inconsistencies or we see gaps, we follow up for clarification. The response, and the way the follow-up is handled, gives us another qualitative data point to take into account: Is the company responsive? Are they cooperative or hostile? Do they demonstrate good will?

Not a perfect picture, but a pretty detailed one

Taken together, this won't ever give us 100% certainty that all answers are true, and will stay true. However, this way we have enough data points and input that a pretty detailed picture emerges. If we ever learn about (or suspect) non-compliance or foul play, we'll follow up, and we reserve the right to revoke the certification. It's a pretty high-touch approach, and we're confident that this will lead to high quality and consistency.

We expect that over time this system will grow more robust, and that we'll gather more insights. We'll keep adjusting and evolving the system as we go. We'll also build a repository of best practices along the way, so we can point new applicants to existing resources. In the end, we want this effort to shape the industry towards more trustworthiness. Education and open communication channels both have an important part to play.