When used thoughtfully, AI can enhance user experiences through personalized interactions, increased efficiency, and innovative designs. PatternFly AI provides resources that can help you integrate AI into your design process while balancing its potential benefits against its unintended consequences.
## Red Hat policies
When using PatternFly to design Red Hat products, you must adhere to Red Hat's established AI-related policies:
- Gain approval before using AI technology for business related to Red Hat.
- Gain approval before using certain information as input for AI technology.
- Review, test, and validate generative AI model output.
- Always consider data privacy when entering company or personal information into AI resources, and ensure compliance with all company data protection policies and rules around AI usage.
## Core principles
There are five core principles of PatternFly AI: accountability, explainability, transparency, fairness, and human-centeredness. These principles create an ethics-first framework for AI use, and any AI system built with PatternFly should adhere to all five.
## Ethical design checklist
When working on an AI system, consciously check that you're aligned with the core principles of PatternFly AI. While this area will continue to evolve with the rest of the industry, the following checklists outline some of the key areas to consider for each principle.
### Accountability

| Key area | Rule | Status |
|---|---|---|
| Policies | Company AI policies are readily accessible to all team members. | |
| Legal compliance | All necessary laws, regulations, and ethical guidelines are followed throughout the development process. AI does not enable illegal, unethical, or contract-breaking activities. | |
| Practices | AI does not answer unsafe questions or access insecure data. | |
### Explainability

| Key area | Rule | Status |
|---|---|---|
| Outcomes | There are clear explanations available that describe how AI conclusions are reached. | |
| Citations | Any related citations are provided to users. | |
| Context | To support troubleshooting, AI gives context to Red Hatters who review its interactions. | |
### Transparency

| Key area | Rule | Status |
|---|---|---|
| Documentation | Design processes and decisions are well documented. | |
| Data usage | Informed consent is obtained to collect and use data. The ways that user data is collected, stored, and used are openly shared. AI is clear about the data that it records. | |
| Confidence | AI shares when it has low confidence in its response. | |
| Limitations | AI shares when it believes that it can't fulfill a request. | |
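The "Confidence" row above describes behavior that a UI layer can enforce directly. A minimal sketch, assuming a hypothetical `AiResponse` shape and an illustrative 0.7 threshold (neither is a PatternFly API):

```typescript
// Hypothetical shape for an AI response that carries a confidence score.
interface AiResponse {
  text: string;
  confidence: number; // 0.0 (no confidence) to 1.0 (full confidence)
}

// Decide whether the UI should attach a low-confidence notice.
// The 0.7 threshold is illustrative, not a PatternFly standard.
function shouldShowLowConfidenceNotice(
  response: AiResponse,
  threshold: number = 0.7
): boolean {
  return response.confidence < threshold;
}

// Build the user-facing message, prefixing a caveat when confidence is low.
function renderResponse(response: AiResponse): string {
  if (shouldShowLowConfidenceNotice(response)) {
    return `Note: I have low confidence in this answer.\n${response.text}`;
  }
  return response.text;
}
```

Keeping the threshold in one place makes it easy to tune as the model or use case changes, instead of hard-coding the caveat logic into individual components.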
### Fairness

| Key area | Rule | Status |
|---|---|---|
| Bias | Potential biases are identified, reduced, and actively studied. | |
| Inclusion | Designs are inclusive and accommodating of various user demographics. | |
| Equal access | Access to AI technologies is available and beneficial to as many users and communities as possible. | |
### Human-centeredness

| Key area | Rule | Status |
|---|---|---|
| Value and need | AI is aligned with user needs and values and will be continuously refined based on user feedback and ethical considerations. | |
| Communication | AI has a predictable tone and voice. It can handle emotional responses from users gracefully. | |
| Cultural sensitivity | Cultural differences are considered and respected. | |
| Opt-out | There is an obvious and simple way for users to opt out of using AI. | |
| Privacy | Personally identifiable information is protected and used responsibly. | |
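The opt-out and data-rights rows above can be enforced in code by gating every AI-powered path behind the user's stored preference. A minimal sketch; the `UserPreferences` shape and its field name are hypothetical, not a PatternFly API:

```typescript
// Hypothetical user preferences record; the field name is illustrative.
interface UserPreferences {
  aiFeaturesEnabled: boolean;
}

// Run the AI-powered path only when the user has not opted out;
// otherwise fall back to the equivalent non-AI experience.
function withAiOptOut<T>(
  prefs: UserPreferences,
  aiPath: () => T,
  fallbackPath: () => T
): T {
  return prefs.aiFeaturesEnabled ? aiPath() : fallbackPath();
}
```

Routing every AI feature through a single gate like this keeps the opt-out obvious and simple, rather than scattering preference checks across individual components.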
## Design guidelines
When designing, developing, and using AI, consider the following ethical and best-practice guidelines.
### 1. Determine if AI adds value
Not all uses of AI are good for your UX strategy. As much as possible, conduct research to identify real user needs that AI features could help solve.
Some of the more common problems that AI might be able to help solve include:
- Increasing users' productivity and efficiency.
- Personalizing the user experience to make engagements more relevant.
- Making design processes more sustainable.
#### When to use AI
Depending on your users' needs, value-adding features could include:
- AI-driven search, to tailor search results to a user's unique needs.
- AI that helps streamline onboarding, data entry, or routine job tasks.
- AI that makes product recommendations based on a user's history.
#### When not to use AI
- Do not add AI features simply because they are new, trendy, or fun. They need to matter to the user.
### 2. Enhance—don't replace—human abilities
AI is best when it enhances human abilities, not when it's used to replace humans. It cannot exist in a silo—humans help bring the value of AI to life.
To ensure that the design of AI systems is human-centered, follow these practices:
- Nurture collaboration and cross-team alignment.
- Welcome multiple perspectives to encourage creativity and help mitigate bias.
- Check AI output for accuracy and identify areas where meaning is lost, language isn't inclusive, or information isn't true. Ask peers to review your AI-generated deliverables to help fact-check and catch mistakes.
### 3. Be transparent with your users
As one of our core pillars, transparency is essential for ethical design with AI.
To help people understand and trust AI features:
- Tell users when AI is being used.
- Make its capabilities and limitations clear to set appropriate expectations.
- Explain how AI makes decisions.
- Keep users in control and let them decide how they interact with AI.
- Be clear and honest when AI fails or hallucinates.
### 4. Be prepared for something to go wrong
Errors and failure are inevitable when working with AI, so it is essential to be prepared to handle undesired outcomes. You should understand the risks involved in using AI and the impact that an error may have.
To create a plan for issues, start by following these guidelines:
- When AI fails, be explicit about errors and let users easily regain control.
- Provide easy access to human support.
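The two guidelines above can be sketched as an explicit result type: failures surface to the user as honest errors with a route to human support, never as silent fallbacks. The error message and support path here are placeholders, not real product routes:

```typescript
// Possible outcomes of an AI request, modeled explicitly so the UI
// can show an honest error and offer a human-support escalation path.
type AiResult =
  | { kind: "success"; text: string }
  | { kind: "error"; message: string; supportPath: string };

// Wrap an AI call so failures become explicit, user-facing errors.
async function askAi(call: () => Promise<string>): Promise<AiResult> {
  try {
    return { kind: "success", text: await call() };
  } catch {
    return {
      kind: "error",
      message: "The AI assistant couldn't complete this request.",
      supportPath: "/support/contact", // hypothetical human-support route
    };
  }
}
```

Because the error variant carries a support path, the component rendering the result can always pair the error message with a "contact a human" action.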