Trust by Design: Visualizing AI Transparency
Interface Design, 2025 @ FH-Potsdam
Project type: Bachelor Project
Role: Student

Artificial Intelligence (AI) has become deeply embedded in contemporary life, offering sophisticated assistance across a broad spectrum of daily tasks: generating automatic reply suggestions in chat, optimizing routes in navigation systems, proposing intelligent appointment slots in calendars, sorting images automatically, prioritizing and partially pre-formulating emails, and compiling personalized music playlists. However, all these applications share a critical characteristic: the decision-making process remains opaque and cannot be easily traced. My work demonstrates how precisely these processes can be made visible and traceable within the user interface.
Project Goal
Artificial Intelligence already permeates numerous areas of life and assists us with a wide range of tasks, as we have seen. All of these applications, however, share one critical characteristic: the decision-making process cannot be easily traced or comprehended. AI frequently appears as a black box: inputs go in, decisions come out, and the pathway in between remains concealed.
Project Objective
My work pursues the goal of developing a user-centered design for the iOS Reminders app that makes AI suggestions transparent, understandable, and influenceable. The central aim is to address the black-box problem: to turn an opaque system into a transparent one in which the AI's decisions are visible and traceable for the user.
Core Design Principles (XAI Requirements)
User Focus: Placing end-user needs at the core of the design process.
Transparency: Revealing the AI's decision process through specific visual cues.
Traceability: Providing clear explanations (why and how) a suggestion was generated.
Influenceability: Allowing users to adjust parameters and directly steer the AI's output.
Methodology
Implementing and empirically validating these four core principles required a structured, user-centered approach. The process that led to the creation and validation of the prototype comprised six sequential steps:
1. Research (Theoretical Foundation): Analysis of current Human-Computer Interaction (HCI) and Explainable AI (XAI) approaches to establish the theoretical framework.
2. Wireframes: Low-fidelity layouts to quickly define the basic structure and information architecture of the proposed interface.
3. Online Survey (Quantitative Validation): A survey (n=18) to systematically gather data on user preferences and requirements, directly informing the detailed design work.
4. Clickable Prototype: A high-fidelity prototype, adhering to the Apple Human Interface Guidelines (HIG), to ensure a realistic user experience for testing.
5. Usability Tests (Empirical Evaluation): Evaluation of the prototype in a practice-oriented setting using the thinking-aloud protocol to capture qualitative user feedback.
6. Final Prototype: Integration of all empirical findings and final refinements to conclude the design process with a validated solution.
Online Survey
The data gathered from the online survey (n=18, ages 26–45) supports the core hypotheses regarding the need for explainable AI (XAI): user acceptance depends strongly on the level of transparency provided.
The key quantitative findings are summarized as follows:
Importance of Explanations: The necessity of receiving explanations was rated highly (Ø 4.22/5)
Acceptance without Explanation: Acceptance dropped significantly when explanations were absent, scoring only Ø 1.94/5
Source Citation: The relevance of knowing the source of the data/suggestion scored the highest (Ø 4.39/5)
Controllability: 88.9% of participants found the ability to set their own rules essential
These figures indicate that transparency and controllability are key to building trustworthy and accepted AI interfaces. The insights derived from these findings fed directly into the creation and refinement of the prototype design presented in the following chapter.
Screen Design: Implementation of XAI
This chapter explains how the Explainable AI (XAI) elements are implemented within the Apple Reminders app. It details the screen design and interface concepts developed to integrate the four core design principles (Transparency, Traceability, Influenceability, and User Focus) into the application's interface, demonstrating a practical design response to the black-box problem.
Visual Anchoring of Suggestions
AI suggestions appear in the day view as separate, color-coded inline boxes directly next to the relevant task, marked with the familiar Apple AI icon. Horizontal rules visually group suggestions with the appointments they relate to, and affected appointments are highlighted in color so that the AI recommendation is immediately identifiable. Users can adopt a suggestion with a single tap on "Accept" or dismiss it via a cross, all without leaving the current view. This keeps their to-dos clear and under their full control at all times.



Explanatory Texts and Contextual Reference
An explanatory text in the suggestion box ("Why am I seeing this?") creates transparency about the displayed suggestion. Tapping on it opens a Bottom Sheet (typical iOS pattern) that clearly lists all proposed adjustments and their justifications. The justifications are supplemented by icons, which facilitates visual categorization. Additionally, the Bottom Sheet offers context-specific action options, such as sending messages or viewing the original email, to ensure traceability.



Control Options and Personalization
Via the "Data" segment in the Bottom Sheet, users can see which data sources (calendar, location, email, etc.) were used for the AI decision. Each data access can be enabled or disabled directly with a switch. This combination of transparency and control fosters trust and a positive user experience.



Feedback and Undo Function
After accepting a suggestion, a modal window appears that lets users choose whether to inform contacts about the schedule change themselves or delegate this task to the AI. This ensures control over the degree of automation. The action is then confirmed by a short overlay displaying the appointment and time.



As soon as a change is confirmed, the day list updates immediately. Affected tasks retain the Apple AI icon and a characteristic color nuance to make the applied adjustments easily traceable in the day view.



Users can undo changes or provide feedback at any time via clearly labeled buttons or swipe actions. Through a quick-access function on the task, a suggestion can be undone with a tap, its justification viewed again, or direct feedback given. This feedback helps tailor the system to the user's individual needs and preferences.



Conclusion of the Screen Design
The combination of explanatory texts, icons, badges, and context-based interactions makes automation transparent. Users can understand, influence, or revoke decisions at any time. This interplay promotes trust and facilitates the integration of explainable AI into daily digital life.
Reflection & Future Outlook
Summary of Achievement
This thesis delivered a user-centered design concept for the iOS Reminders app that transforms AI suggestions from a black box into a transparent, traceable, and influenceable feature. The empirical data supported the hypothesis that user trust depends on the provision of explanations and control, and formed the foundation for the finalized screen design.
Key Design Learnings & Future Outlook
My work demonstrates how Explainable AI can help structure everyday life and strengthen trust in digital systems. However, the potential is far from being exhausted.
Future Adaptations: In the future, new data sources and adaptive functions could be added. The critical question remains: how can users maintain control without losing transparency or data protection?
Societal Value: In the long term, XAI offers not only individual but also societal added value—for a more conscious, inclusive, and fairer digital participation.
Limitations & Research Constraints
Despite the robust methodology, the study was subject to several practical limitations:
Test Group Size: The usability tests involved only four participants, which limits the generalizability of the qualitative findings.
Technical Simulation: The final prototype was a technical simulation rather than a working system, so deep, real-time automation could not be evaluated.
Ethical Scope: Ethical questions regarding data privacy and long-term trustworthiness remain central but were not addressed within the timeframe of this thesis.
Acknowledgements
In closing this thesis, I would like to sincerely thank my parents for their support and patience throughout my studies.
My gratitude also goes to my fellow students and friends, who provided honest feedback and motivation during the Bachelor's thesis.
Special thanks go to my supervisors and the team at Studio GOOD for their valuable insights, patience, and the trust they placed in my work.