1. Design for utility first, companionship second
82% of participants use AI for information research, work tasks, and learning. Emotional or companionship-oriented use barely registers. That shows where users are today, not where speculative features want them to be.
If you're building an AI product, nail practical utility before layering on emotional intelligence or personification. Most users are not looking for emotional dependence on AI, and pushing that framing too early creates friction and distrust of the product.
2. Familiarity is your most powerful trust-building tool
Confidence and comfort using AI both showed a positive association with trust: the more people use it, the more they trust it. Trust is not built through UI copy that says "you can trust us." It is built through repeated, low-stakes interactions that go well.
Design onboarding to create small wins early. Be transparent about the Gulf of Execution — close the gap between what users expect your tool to do and what it actually does. A small user base that returns consistently is better than a larger one that leaves after failed outcomes.
Each successful interaction is a trust deposit. Start building.
3. Transparency beats reassurance when it comes to risk
Highly confident AI users still reported high risk concern across the full scale. Familiarity does not reduce perceived risk. If your product adds a "don't worry, AI is safe" banner, it likely will not do much. What you can control is transparency about what the AI is doing, why, and when it is uncertain. Users respect honesty more than false reassurance.
Some users just want to prompt, get what they need, and leave; still, offer transparency as an option. Show a glimpse of the black box without overwhelming people with information.
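As one illustration of "a glimpse of the black box," a response could carry a small amount of provenance metadata and render it as a single glanceable line. This is a minimal sketch, not a method from the study; the class and field names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIResponse:
    answer: str
    confidence: str                      # e.g. "high" | "medium" | "low" (illustrative labels)
    sources: List[str] = field(default_factory=list)

def transparency_summary(resp: AIResponse, max_sources: int = 2) -> str:
    """Condense provenance into one line instead of a full trace,
    so curious users can peek without being overwhelmed."""
    shown = resp.sources[:max_sources]
    src = ", ".join(shown) if shown else "no sources cited"
    return f"Confidence: {resp.confidence} · Based on: {src}"
```

A usage example: `transparency_summary(AIResponse("Paris is the capital of France.", "high", ["wikipedia.org"]))` yields a one-line summary the UI can tuck under the answer, expandable only if the user asks for more.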
4. AI is a supplement, not a substitute for human connection
Even participants reporting high loneliness or low relationship satisfaction did not turn to AI for emotional support, and they rated AI interactions as significantly less meaningful than human ones.
There is a subtle compensatory pattern — lower relationship satisfaction correlates slightly with more AI companionship use — but it is weak. Design AI products that point users back toward human connection. Products that try to replace human relationships will hit a ceiling.
When designing AI products that are more human-facing and personified, be mindful of vulnerable users and build safety nets into the AI.
5. Stop over-segmenting by demographics
Trust, comfort, companionship attitudes, and risk perception were remarkably consistent across age groups, genders, and countries. The data suggests current AI attitudes are broadly shared, not a generational or cultural outlier. You do not need radically different experiences for different demographics. Focus on shared psychology, interaction quality, and accessibility.
Still, this data comes mainly from the UK, US, and Canada. Other cultures may have different attitudes.
6. Users know AI shapes their thinking — design for that awareness
Participants reported that AI influences their decision-making at meaningful levels. People are aware that AI nudges how they reason. Make the AI's reasoning legible, and give users clear ways to override its suggestions.
Build products that help people articulate their ideas — work toward human augmentation, not replacement. Products that feel like a black box will erode trust faster than products that show their work, even imperfectly.
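"Show their work" and "let users override" can be as simple as attaching a short rationale to each suggestion and always letting an explicit user edit win. A minimal sketch of that shape; every name here is illustrative, not from the article.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Suggestion:
    text: str                        # the AI's suggested content
    rationale: str                   # short "why" shown next to the suggestion
    accepted: bool = False
    user_override: Optional[str] = None

def resolve(suggestion: Suggestion, override: Optional[str] = None) -> str:
    """Keep the human in control: an explicit override always wins,
    and the choice is recorded either way."""
    if override is not None:
        suggestion.user_override = override
        return override
    suggestion.accepted = True
    return suggestion.text
```

Recording overrides (rather than silently discarding them) also gives you a signal for where the AI's reasoning failed to convince, which is exactly the augmentation-over-replacement framing above.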
If you read until the end, thank you so much for reading <3 you are my hero
All data belongs to the Human Clarity Institute; thank you for sharing it as open data.
-Ege