The Economist bilingual edition: An AI for an eye

A pioneering ophthalmologist highlights the potential, and the pitfalls, of medical AI


THE BOOKS strewn around Pearse Keane’s office at Moorfields Eye Hospital in London are an unusual selection for a medic. “The Information”, a 500-page doorstop by James Gleick on the mathematical roots of computer science, sits next to Neal Stephenson’s even heftier “Cryptonomicon”, an alt-history novel full of cryptography and prime numbers. Nearby is “The Player of Games” by the late Iain M. Banks, whose sci-fi novels describe a utopian civilisation in which AI has abolished work.


Dr Keane is an ophthalmologist by training. But “if I could have taken a year or two from my medical training to do a computer-science degree, I would have,” he says. These days he is closer to the subject than any university student. In 2016 he began a collaboration with DeepMind, an AI firm owned by Google, to apply AI to ophthalmology.


In Britain the number of ophthalmologists is not keeping up with the falling cost of eye scans (about £20, or $25, from high-street opticians) and growing demand from an ageing population. In theory, computers can help. In 2018 Moorfields and DeepMind published a paper describing an AI that, given a retina scan, could make correct referral decisions 94% of the time, matching human experts. A more recent paper described a system that can predict the onset of age-related macular degeneration, a progressive disease that causes blindness, up to six months in advance.


But Dr Keane cautions that in practice, moving from a lab demonstration to a real system takes time: the technology is not yet being used on real patients. His work highlights three thorny problems that must be overcome if AI is to be rolled out more quickly, in medicine and elsewhere.


The first is about getting data into a coherent, usable format. “We often hear from medics saying they have a big dataset on one disease or another,” says Dr Keane. “But when you ask basic questions about what format the data is in, we never hear from them again.”


Then there are the challenges of privacy and regulation. Laws guarding medical records tend to be fierce, and regulators are still wrestling with the question of how exactly to subject AI systems to clinical trials.


Finally there is the question of “explainability”. Because AI systems learn from examples rather than following explicit rules, working out why they reach particular conclusions can be tricky. Researchers call this the “black box” problem. As AI spreads into areas such as medicine and law, solving it is becoming increasingly important.


One approach is to highlight which features in the model’s input most strongly affect its output. Another is to boil models down into simplified flow-charts, or let users question them (“would moving this blob change the diagnosis?”). To further complicate matters, notes Dr Keane, techies building a system may prefer one kind of explainability for testing purposes, while medics using it might want something closer to clinical reasoning. Solving this problem, he says, will be important both to mollify regulators and to give doctors confidence in the machines’ opinions.
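The first approach, attributing an output to regions of the input, can be made concrete with occlusion: mask one patch of the input at a time and measure how much the model's prediction moves. A minimal sketch, with a toy stand-in model (the function names and the 4x4 "image" are illustrative, not anything from the Moorfields system):

```python
import numpy as np

def occlusion_saliency(model, x, patch=2):
    """Score each patch of input x by how much zeroing it shifts the model's output."""
    base = model(x)
    saliency = np.zeros_like(x, dtype=float)
    for i in range(0, x.shape[0], patch):
        for j in range(0, x.shape[1], patch):
            occluded = x.copy()
            occluded[i:i + patch, j:j + patch] = 0  # mask this blob
            # A large change in output means this region mattered.
            saliency[i:i + patch, j:j + patch] = abs(base - model(occluded))
    return saliency

# Toy "model": its output depends only on the top-left quadrant's brightness.
def toy_model(img):
    return img[:2, :2].sum()

img = np.arange(16, dtype=float).reshape(4, 4)
sal = occlusion_saliency(toy_model, img)
# Only the top-left patch gets a nonzero score; masking anywhere else
# leaves the output unchanged, so those regions score zero.
```

This is the intuition behind the "would moving this blob change the diagnosis?" question: regions whose removal changes the answer are the ones the model is leaning on.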


But even when it is widely deployed, AI will remain a backroom tool, not a drop-in replacement for human medics, he predicts: “I can’t foresee a scenario in which a pop-up on your iPhone tells you you’ve got cancer.” There is more to being a doctor than accurate diagnosis.





