AI UX: The Design Problem

By Jeff Billings

Without understanding how AI impacts user experience, a UX designer is at best making a blind guess about what any design decision will achieve. We have observed this issue firsthand.

In particle physics the observer effect is well understood: the act of observation itself creates uncertainty. Why is this relevant? AI observes and interacts with people, who are as diverse as nature itself, so how people react to AI will always carry an inescapable error rate. Where large numbers of cause-and-effect patterns exist, neural networks can incorporate those patterns into their internal functions to solve a given problem.

Issue 1: Initial Learning Data

The first issue is that initial AI training uses data in which the AI itself is not part of the observed environment. In some cases this doesn't affect the outcome, for example when the variables describing the user experience are unrelated to the choices the user can make. Where the outcome has low sensitivity to user actions (usually in trivial or obvious cases), the AI will likely achieve a low error rate against its desired objective. In highly interactive user experiences, where the user's behavior and choices directly impact the AI's objective function, the error rate is noisy at best and often useless.

To mitigate this issue, AI science uses a Cold Start approach, in which the AI is integrated into the environment and learns to solve its objective as part of the user experience. The Cold Start technique minimizes the error caused by the presence of the AI itself and by the UX design. It also eliminates the degradation of effect caused by environment variables that do not match the AI's internal model. In mathematics, this type of error is known as basis confusion.
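To make the idea concrete, here is a minimal sketch of in-environment learning using an epsilon-greedy bandit. The bandit, the action names, and the simulated response rates are illustrative assumptions on our part, not the specific technique described above; the point is that the model starts cold and learns only from interactions it was itself part of.

```python
import random

# Sketch of "cold start" learning: the model is deployed untrained and
# learns only from outcomes it influenced, so its own presence in the
# user experience is baked into the data it learns from.
actions = ["recommend_a", "recommend_b", "recommend_c"]
value = {a: 0.0 for a in actions}   # learned estimate of each action's payoff
count = {a: 0 for a in actions}

def simulated_user_response(action):
    # Stand-in for a real user; the true rates are unknown to the model.
    true_rate = {"recommend_a": 0.05, "recommend_b": 0.12, "recommend_c": 0.02}
    return 1.0 if random.random() < true_rate[action] else 0.0

for step in range(10_000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(actions)
    else:                                      # exploit the current estimate
        action = max(actions, key=value.get)
    reward = simulated_user_response(action)   # outcome the AI influenced
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # running mean

print(value)  # estimates converge toward the true response rates
```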

In summary, AI that isn't part of the user experience is unable to model its own effect on results, leading to unavoidable error.

Issue 2: Change Over Time

To address this erosion of accuracy, which begins with the first modeling of any problem involving users, neural networks need to retrain once they are part of the environment. AI designers must also correctly account for recursive reinforcement and other feedback problems that distort the models.

Otherwise, an AI that reinforces its own errors can become delusional. Where vast amounts of data exist to train on, the effect of these problems can be negligible, but that is rarely the case for the small and mid-sized businesses that need AI to compete. Some companies substitute data from open sources that they believe is a proxy for the actual data, creating a training set from which the AI can learn to meet its objective. That process of data curation and synthesis relies heavily on the skill of the data science team; their skill and the complexity of the problem will determine the error rate.

Teams of excellent AI scientists are rare and expensive, which puts the needed talent out of reach for problems where the data is sparse and the problem is complex. For UX designers, the result is that AI can render their best work ineffective: an AI integrated into the system may behave in unforeseen ways because it was trained against a UX design that has since changed, leading to errors and unexpected effects.

A Real-World Example

To illustrate this point, consider a UX design problem we observed firsthand. Client “R” integrated AI but placed the AI interaction point where customers rarely engaged with it. In measurable terms, users met and interacted with the AI in only 2% of the total traffic on the site. As a result, the AI was unable to contribute to the site, making it a waste of money. Using AI successfully requires understanding both the user interaction rate and how data is presented to the AI.

To explain why this was a flawed integration, assume the site generates $100K in revenue a month and the AI objective function can increase revenue by 25%, for a new revenue target of $100K + ($100K * 25%) = $125K … except the UX has interfered with the lift. The actual formula is revenue + (revenue * lift * interaction rate), which at a 2% interaction rate works out to $100,500 a month: not worth the cost of the AI product or the time and trouble to deploy it.
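In code, the discount that the interaction rate applies to lift looks like this. A minimal sketch; the function name is ours and the figures are simply the illustrative numbers above:

```python
def effective_revenue(revenue, lift, interaction_rate):
    # Lift only applies to the fraction of traffic that meets the AI.
    return revenue + revenue * lift * interaction_rate

# Client "R"'s situation: a 25% lift throttled by a 2% interaction rate.
print(effective_revenue(100_000, 0.25, 0.02))  # 100500.0
```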

To fully illustrate the problem, assume the AI costs $100 a month to use and the business has an average margin of 10%. The margin on $500 of new sales at 10% is $50, not enough to cover a $100 expense. If the user experience had raised the AI interaction rate to 25% of traffic, the situation would be quite different: new revenue would be $106,250, leaving $6,250 of incremental revenue, which at a 10% margin is $625; subtract the $100 variable cost of the AI and $525 of new margin remains. Even with mediocre UX design in which the AI reaches only 25% of traffic, the AI can contribute to a business's growth, and capable AI adequately integrated into the user experience can create significant growth.
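The same arithmetic yields a breakeven interaction rate: the AI pays for itself once margin * revenue * lift * rate exceeds its cost. A minimal sketch, again with the illustrative numbers above and helper names of our choosing:

```python
def new_margin(revenue, lift, rate, margin, ai_cost):
    # Incremental margin from AI lift, after the AI's variable cost.
    return revenue * lift * rate * margin - ai_cost

def breakeven_rate(revenue, lift, margin, ai_cost):
    # Interaction rate at which the AI's contribution exactly covers its cost.
    return ai_cost / (revenue * lift * margin)

print(new_margin(100_000, 0.25, 0.02, 0.10, 100))  # -50.0  -> loses money at 2%
print(new_margin(100_000, 0.25, 0.25, 0.10, 100))  # 525.0  -> profitable at 25%
print(breakeven_rate(100_000, 0.25, 0.10, 100))    # 0.04   -> breakeven at 4% of traffic
```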

Using Recommendation Success as the measurement, which requires a user to select a recommendation and complete the objective function in one session (as close to unambiguous proof of AI lift as it is possible to measure), we have the following real-world numbers. Client “G” has an AI interaction rate of 20.2% in the UX. With monthly revenue of $28,110, the AI added $1,396 of Recommendation Success and a total actual revenue increase of $9,113 including knock-on effects, at a variable Cold Start AI cost of $31.90. The AI in this case learned from a cold start and was reinforced with continuous learning. The UX design was typical, yet at an interaction rate of roughly 20% of site traffic, the business case for using AI was clearly positive.

Handling Change

Most UX applications evolve over time, and change can be deadly unless the AI can learn at least daily. When user interaction patterns shift, whether from data changes or UX implementation changes, every answer's error increases. Mitigate this by choosing an AI capable of training in an unsupervised manner to account for change. The greater the divergence between the training examples and the data the model now sees, the greater the AI error. Ultimately, the value is only as good as the model: if model fidelity is high, meaning the model closely simulates reality, the value will be useful or even essential.
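One way to act on "learn at least daily" is a scheduled job that retrains whenever live data drifts from the data the model was trained on. A rough sketch; the mean-shift drift test and the sample numbers are illustrative assumptions, not a prescribed method:

```python
import statistics

# Retrain when live interaction data drifts from the training data.
# Drift here is a shift in the mean, measured in training-set standard
# deviations; real deployments may use more robust tests.

def drifted(train_sample, live_sample, threshold=0.5):
    mu = statistics.mean(train_sample)
    sigma = statistics.stdev(train_sample)
    shift = abs(statistics.mean(live_sample) - mu) / sigma
    return shift > threshold

train_clicks = [0.21, 0.19, 0.22, 0.20, 0.18, 0.23]  # historical interaction rates
live_clicks  = [0.08, 0.10, 0.07, 0.09, 0.11, 0.08]  # after a UX change

if drifted(train_clicks, live_clicks):
    print("UX or data change detected: retrain the model on recent data")
```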
