Abstract

Some automated accessibility testing tools analyze screens in isolation and can fail to detect complex, journey-level barriers that emerge over multiple interactions. A system can analyze the temporal experience of an assistive technology (AT) user by programmatically orchestrating a user journey and capturing a sequence of states, which may include the accessibility tree and focus state at each step. This sequential data can then be provided to a large language model, which can be prompted to simulate a screen reader user's experience and evaluate the flow for issues of focus expectation, context retention, and navigation consistency. This approach may help surface barriers, such as context loss or navigational inconsistency, that static, single-screen analysis might miss, potentially improving the usability of multi-step digital flows for AT users.
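
To make the pipeline concrete, a minimal sketch is shown below, assuming Playwright for browser orchestration. The journey steps, selectors, URL, and prompt wording are illustrative placeholders, not the system's actual implementation; the LLM call itself is left to the reader's provider of choice.

import json
from playwright.sync_api import sync_playwright

# Hypothetical multi-step flow; real journeys would be supplied by the tester.
JOURNEY = [
    ("goto", "https://example.com/cart"),
    ("click", "text=Checkout"),
    ("click", "text=Continue to payment"),
]

def capture_state(page, step_label):
    """Capture the accessibility tree and current focus after one step."""
    return {
        "step": step_label,
        # Accessibility tree as exposed to assistive technologies.
        "accessibility_tree": page.accessibility.snapshot(),
        # Element holding keyboard focus, where a screen reader user would land.
        "focused_element": page.evaluate(
            "document.activeElement ? document.activeElement.outerHTML : null"
        ),
    }

def run_journey():
    """Orchestrate the journey and record a state snapshot after each step."""
    states = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for action, target in JOURNEY:
            if action == "goto":
                page.goto(target)
            else:
                page.click(target)
            page.wait_for_load_state("networkidle")
            states.append(capture_state(page, f"{action} {target}"))
        browser.close()
    return states

def build_prompt(states):
    """Assemble the sequential states into a prompt asking the model to
    simulate a screen reader user's pass through the flow."""
    return (
        "You are simulating a screen reader user completing a multi-step flow.\n"
        "For each step below, assess: (1) does keyboard focus land where the\n"
        "user would expect, (2) is context from prior steps retained, and\n"
        "(3) is navigation consistent across steps? Report any journey-level\n"
        "barriers such as context loss or unexpected focus jumps.\n\n"
        + json.dumps(states, indent=2)
    )

if __name__ == "__main__":
    prompt = build_prompt(run_journey())
    print(prompt)  # send to a large language model for evaluation

Capturing the full state sequence before prompting, rather than querying the model screen by screen, is what lets the evaluation consider temporal issues such as context retention across steps.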

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.
