
dc.contributor.author: Liu, Xiaoxuan
dc.contributor.author: Glocker, Ben
dc.contributor.author: McCradden, Melissa M
dc.contributor.author: Ghassemi, Marzyeh
dc.contributor.author: Denniston, Alastair K
dc.contributor.author: Oakden-Rayner, Lauren
dc.date.accessioned: 2024-11-18T12:02:46Z
dc.date.available: 2024-11-18T12:02:46Z
dc.date.issued: 2022-04-05
dc.identifier.citation: Liu X, Glocker B, McCradden MM, Ghassemi M, Denniston AK, Oakden-Rayner L. The medical algorithmic audit. Lancet Digit Health. 2022 May;4(5):e384-e397. doi: 10.1016/S2589-7500(22)00003-6. Epub 2022 Apr 5. Erratum in: Lancet Digit Health. 2022 Jun;4(6):e405. doi: 10.1016/S2589-7500(22)00089-9
dc.identifier.eissn: 2589-7500
dc.identifier.doi: 10.1016/S2589-7500(22)00003-6
dc.identifier.pmid: 35396183
dc.identifier.uri: http://hdl.handle.net/20.500.14200/6554
dc.description.abstract: Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences. We suggest several approaches for testing algorithmic errors, including exploratory error analysis, subgroup testing, and adversarial testing, and provide examples from our own work and previous studies. The medical algorithmic audit is a tool that can be used to better understand the weaknesses of an artificial intelligence system and put in place mechanisms to mitigate their impact. We propose that safety monitoring and medical algorithmic auditing should be a joint responsibility between users and developers, and encourage the use of feedback mechanisms between these groups to promote learning and maintain safe deployment of artificial intelligence systems.
dc.language.iso: en
dc.publisher: Elsevier
dc.relation.url: https://www.sciencedirect.com/journal/the-lancet-digital-health
dc.rights: Copyright © 2022 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
dc.subject: Ophthalmology
dc.title: The medical algorithmic audit.
dc.type: Article
dc.source.journaltitle: The Lancet Digital Health
dc.source.volume: 4
dc.source.issue: 5
dc.source.beginpage: e384
dc.source.endpage: e397
dc.source.country: United Kingdom
dc.source.country: England
rioxxterms.version: NA
dc.contributor.trustauthor: Denniston, Alastair K
dc.contributor.department: Ophthalmology
dc.contributor.role: Medical and Dental
oa.grant.openaccess: na

