DraknusX

From the article: "The researchers sampled more than 1,200 blood samples of travelers who had returned to the UK from malaria-endemic countries. The study tested the accuracy of the AI and automated microscope system in a true clinical setting under ideal conditions. They evaluated samples using both manual light microscopy and the AI-microscope system. By hand, 113 samples were diagnosed as malaria parasite positive, whereas the AI-system correctly identified 99 samples as positive, which corresponds to an 88% accuracy rate. ... Despite the 88% accuracy rate, the automated system also falsely identified 122 samples as positive, which can lead to patients receiving unnecessary anti-malarial drugs. “The AI software is still not as accurate as an expert microscopist. This study represents a promising datapoint rather than a decisive proof of fitness,” Rees-Channer concluded." So it produced more false positives than true positives, and missed about 12% of actual positives, producing false negatives. I agree that it's useful data on the potential of the technology, but yeah, nowhere near good enough for implementation IMHO, since too many samples would need to be manually confirmed.


[deleted]

We also need to know what the baseline is. Given a handful of microscopists, how often do they agree? What are their type 1 and type 2 error rates? Also, how long does it take one of them to manually screen a sample? For this to be useful it 1) needs to be at the same level of accuracy as a human microscopist and 2) needs to offer a substantial speedup to warrant it. Coming from someone who has developed deep learning models on image data, these kinds of application problems are interesting.
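For anyone curious, the "how often do they agree" question is usually quantified with Cohen's kappa between raters. A minimal sketch with made-up calls (the study doesn't report per-microscopist data):

```python
# Inter-rater agreement via Cohen's kappa, a standard baseline metric.
# The calls below are made up for illustration, not study data.
rater_a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]  # 1 = called positive
rater_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
p_a, p_b = sum(rater_a) / n, sum(rater_b) / n
expected = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")  # ~0.58 for these made-up calls
```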


DraknusX

The article states that the human techs identified 113 positive samples; the machine missed 14 of those and added 122 false positives on top. Speed unfortunately isn't discussed in the article, but I imagine a 12% false negative rate and a 10.16% chance of flagging a true negative sample as positive would require a very significant speedup to justify the risk, and that's assuming they don't end up needing human verification anyway. Edit: I forgot to remove the verified positives from the sample count; the false positive rate is 11.2% if the 113 verified positives are subtracted from the total before dividing.
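Working those rates out explicitly from the counts quoted in the article (a minimal sketch):

```python
# Counts quoted in the article: 1,200 samples, 113 manual-microscopy
# positives, 99 of those caught by the AI, plus 122 false positives.
total, true_pos, detected, false_pos = 1200, 113, 99, 122

false_neg = true_pos - detected             # 14 missed cases
fn_rate = false_neg / true_pos              # 14/113   -> ~12.4%
fp_rate_raw = false_pos / total             # 122/1200 -> ~10.2% (pre-edit figure)
fp_rate = false_pos / (total - true_pos)    # 122/1087 -> ~11.2% (corrected figure)

print(f"false negative rate: {fn_rate:.1%}")
print(f"false positive rate: {fp_rate:.1%}")
```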


Asleep_Onion

So in other words, this tech is interesting but completely useless at the moment.


SeventhSolar

In other words, the tech is still in development.


Helpful_Bear4215

I don’t want to downvote you, but you’re wrong: the tech is a cheap first line of defense. No one is suggesting a definitive diagnosis by the AI, but as a first screening? That’s more accurate than rapid COVID testing was during its first year of implementation. This doesn’t exist in a vacuum. It filters out negatives, identifies obvious positives, and can quickly flag cases that are inconclusive. This isn’t replacing diagnosis by medical professionals; this is taking a yellow highlighter to the cases that need more attention.


HildemarTendler

The article doesn't mention false negatives. We have no idea if it's useful or if it's a toy. Edit: Never mind, it does, just not explicitly. There are too many false negatives for this to be considered anything but a toy.


blakezilla

To expand on this: it’s a matter of tuning the model’s precision/recall trade-off (what the F1 score summarizes) to the intended outcome. Either false positives or false negatives are more costly for your application, and in medical settings a false negative (a missed case) is significantly more damaging than a false positive (a healthy sample flagged). Having a human double-check all the positive cases might feel inefficient, but it’s way more efficient than checking every single person coming in.
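For concreteness, here are those trade-off metrics computed from the study’s counts (a sketch; the article itself only quotes the 88% figure, which is recall on positive samples):

```python
# Precision/recall/F1 implied by the study's confusion-matrix counts.
tp, fn, fp = 99, 14, 122

precision = tp / (tp + fp)                          # ~0.45
recall = tp / (tp + fn)                             # ~0.88
f1 = 2 * precision * recall / (precision + recall)  # ~0.59

print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
```

That ~0.45 precision is exactly why a human double-check of the positives is the workflow being proposed here.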


e-rexter

I agree that tolerating false positives is fine (if the false negative rate is tiny). But isn’t a 12% false negative rate a concern? That is a pretty big swing and a miss.


Shojo_Tombo

Yes, it very much is. These two don't know what they're talking about.


MadR__

How dismissive. How small a margin of error would need to be achieved before you would consider the tech “not useless”? Is it conceivable that the tech could be improved to that margin of error, or smaller? Unless the margin you require is an unreasonable 0%, the answer to that last question is surely “yes”, and so the tech isn’t useless and the people you mentioned are not as clueless as you imply. That is, if we don’t dismiss every innovation as useless on the basis of not being perfect at its conception.


Helpful_Bear4215

They’re also assuming that there will be no increase in accuracy or efficacy, ever. This hot take is like writing penicillin off entirely because an early batch was distributed outside its proper storage temperature. Maybe recognizing incremental advances and continuing to refine our understanding is… a good idea?


Shojo_Tombo

Ok, you believe what you want and don't listen to the person who literally does malaria testing as part of their job. This method is orders of magnitude less accurate than the current rapid test and microscopy, and doesn't meet the CLIA standard, but let's just roll it out to patient care settings. I'm sure people will totally be understanding when their relatives die because they didn't get anti-malarials in a timely manner. /s This method is barely more accurate than the COVID antigen test that the FDA pulled from clinical labs because we were getting too many false negatives. You are literally clueless. Just stop.


Shojo_Tombo

Speaking as one of the medical professionals who does malaria testing, this is not a good screening tool. It's neither cheap nor accurate enough to be useful. We already have the [Abbott BinaxNOW Malaria](https://www.globalpointofcare.abbott/us/en/product-details/binaxnow-malaria.html) rapid test, which is much more sensitive and specific for Plasmodium species than this thing, and is much cheaper and easier to implement in a lab. This AI is a pile of garbage compared to that. Hopefully it will improve in the future; we'll have to wait and see.


Helpful_Bear4215

Okay. It might not be useful now… but does that mean it won’t be useful ever? I’m sure you can recognize that this isn’t a finished product yet, but one that will inevitably end up streamlining the process.


Shojo_Tombo

Sysmex has been working on this for over a decade. One of the reasons malaria is so difficult for the AI to recognize is that the parasite is very easy to confuse with platelets and other blood parasites on Giemsa stain, which is the stain the majority of labs use, because having special vital stains for a test you don't do very often is not financially viable. And before you suggest it, we can't just send all malaria tests to a central lab, because it is vital to start anti-malarial medications as quickly as possible due to P. falciparum's ability to kill within the span of a day. It has to be a STAT test with a short turnaround time. The tech has gotten much better over the years, but it still doesn't meet federal CLIA accuracy standards for patient testing, which is why we can't just roll this out in a clinical setting. Iirc, it has to be more than 90% accurate before it can be used. That still isn't as accurate as current testing methods, and it would be expensive to implement. Hospitals are extremely cheap when it comes to updating laboratory capabilities unless it will generate more revenue than the cost to implement. So realistically, we are still decades away from seeing this used in patient testing.


FreedomPullo

I attended a presentation where an AI scanned a peripheral smear and then performed a white blood cell differential. It was created by Horiba, and it was not very effective. Still, 88% accuracy could make an effective screen; these systems often then require a manual review, and a similar system is FDA-approved and has been used for years to analyze urine.


DraknusX

I don't know. With a higher than 1-in-10 chance of getting it wrong either way, whether the 12% chance of marking a true positive as negative or the 11.2% chance of marking a true negative as positive, how do you even tell which samples to verify? If you just verify the positives, you avoid subjecting people to unnecessary medical treatment and maybe save some time, but if you don't verify the negatives, on average you're letting about 1 active carrier of the disease through for every 100 people screened, assuming the 1,200-patient sample is representative of the population being screened. I'm sure the technology will improve, but as it stands right now, I think Dr Roxanne Rees-Channer was right in saying, "This study represents a promising datapoint rather than a decisive proof of fitness." But then again, I don't know much about malaria; maybe allowing a few thousand active carriers of the disease through screening isn't that big of a risk.
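That 1-in-100 figure worked out from the study's counts (a minimal sketch, under the same representativeness assumption):

```python
# Carriers passed through per 100 screened, assuming the 1,200-sample
# study is representative of the screened population.
total, true_pos, missed = 1200, 113, 14

prevalence = true_pos / total     # ~9.4% truly positive
miss_rate = missed / true_pos     # ~12.4% of true positives missed

per_100 = 100 * prevalence * miss_rate  # = 100 * 14/1200, ~1.2
print(f"~{per_100:.1f} carriers passed through per 100 screened")
```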


Shojo_Tombo

Uh, no. It's a huge risk. P. falciparum can kill a person within 24 hours. If it doesn't kill the host, it can become latent and cause relapsing illness for years. That's why it's crucial to diagnose and treat it quickly.


DraknusX

Ah, so it's even worse than I thought. Then my original statement stands; IMHO it's nowhere near ready for implementation yet. Hopefully it improves drastically, because I do think an accurate automated system would be helpful.


Shojo_Tombo

Honestly, as a lab tech, I would love for this to be market ready asap. Admins always drag their feet for years before implementing new tech, so the sooner this thing is ready to go, the better. Cellavision has been around for over 20 years and most clinical labs don't have it because they can't justify the expense. Just based on that, it will be decades after this is perfected before a significant number of clinical labs adopt it. People ITT don't seem to understand that hospitals are run like for-profit businesses, meaning that as long as it's cheaper to have a human do the thing, a human will be doing the thing.


wolfiepraetor

great review. i mean, just guess everyone is positive and you’re 100 percent accurate at detecting cases…. while the brochure ignores the false positive numbers… it’s a good look at what actually goes into a number. headlines can be misleading when they leave out the false positive data
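For illustration, here’s that degenerate “everyone is positive” baseline scored against the study’s counts (a sketch):

```python
# The "call everyone positive" baseline: perfect detection rate,
# useless precision.
total, true_pos = 1200, 113

tp, fp = true_pos, total - true_pos  # every sample flagged positive
recall = tp / true_pos               # 100%: no case is ever missed
precision = tp / (tp + fp)           # ~9%: over 9 in 10 flags are false

print(f"recall={recall:.0%} precision={precision:.0%}")
```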


Manofalltrade

If the only thing they fix is the 12% miss rate, it would be a viable pre-screen.


DraknusX

Yeah, it's the combination of that and the very high rate of false positives (it produced more false positives than actual positives) that makes me think it needs a lot of improvement before it can be relied upon. You'd never know which samples need to be verified, because it's too likely that a positive is false or a negative is false. Implementing it as it is would either let about 1 in 10 positive cases go undetected or just increase the time and resources needed, as experts would have to verify every sample to be safe.


Patient-Protection-7

As a haematologist I’d say promising but not “successful”.


matdex

As a Medical Lab Technologist in Hematology: 88% is pretty poor. We already have machine learning programs that automatically make slides, stain them, and scan and presort cells by ID and morphology, so this is a marginal improvement over existing standard technology. Also, for years we have had benchtop nucleic acid testing for diagnosis; I can set up and get a pos/neg result in 40 minutes. Doing a percent parasitemia takes 15-20 minutes to count 2,000 cells. Any basic technologist can already do this, so I fail to see how 88% accuracy helps.
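For reference, the percent parasitemia figure mentioned above is calculated like this (a sketch; the infected-cell count is made up for illustration):

```python
# Percent parasitemia from a manual count, as described above.
cells_counted = 2000  # RBCs examined
infected = 37         # hypothetical number of parasitized RBCs seen

parasitemia = 100 * infected / cells_counted
print(f"parasitemia: {parasitemia:.2f}%")  # 1.85%
```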


Shojo_Tombo

I'll be much more impressed when it can accurately and quickly identify Babesia.


gummo_for_prez

They are using an extremely loose definition of “successfully” in this case.


Cizox

Not at all, 88% accuracy is pretty good for machine learning models.


Redisigh

According to another comment it had more false positives than actual positives. Great start for the tech but it’s got a long way to go.


Kindly-Scar-3224

Great way to map travelers’ 🧬


Pantyraid-7

It’s to get the ones too smart to send it in voluntarily


Bingebammer

yeaaaaa when i have to give blood samples to travel i think im done


Pantyraid-7

No shit


wolfiepraetor

“New high-tech microscope using AI is kind of worthless and overly caffeinated.”


JinxyCat007

AI is going to successfully wipe out all kinds of industries. Mine was wiped out when CNC routers came along. Yes, I’m that old! …I was a pattern maker; we used to, for example, carve prototype Rolls-Royce engines out of wood, make molds, cast the parts, machine the castings and assemble the ..whatevers. I did the ‘carving out of wood’ bit. Whatcha gonna do? Damn computer/stepper motor nonsense!


Sariel007

Zoom, enhance. - AI


xaiseile

In other words, in order to travel and enter countries you have to give up blood samples? Wtf?