Archives of Medicine

  • ISSN: 1989-5216
  • Journal h-index: 17
  • Journal CiteScore: 4.25
  • Journal Impact Factor: 3.58
  • Average acceptance to publication time (5-7 days)
  • Average article processing time (30-45 days): less than 5 volumes, 30 days; 8-9 volumes, 40 days; 10 or more volumes, 45 days

Commentary - (2022) Volume 14, Issue 11

Artificial intelligence in anatomic pathology: Challenges with development, use, and regulation

Jerome Balis*
 
Department of Microbiology, School of Mathematical and Natural Sciences, University of Venda, Thohoyandou, South Africa
 
*Correspondence: Jerome Balis, Department of Microbiology, School of Mathematical and Natural Sciences, University of Venda, Thohoyandou, South Africa, Email:

Received: 01-Nov-2022, Manuscript No. ipaom-23-13366; Editor assigned: 03-Nov-2022, Pre QC No. P-13366; Reviewed: 15-Nov-2022, QC No. Q-13366; Revised: 21-Nov-2022, Manuscript No. R-13366; Published: 28-Nov-2022

Description

The potential for systems based on machine learning (ML) and artificial intelligence (AI) to replace or supplant physicians in numerous medical fields, including anatomic pathology (AP), has received considerable attention recently. AI-based solutions frequently fail, producing unpredictable or incorrect results when confronted with data or patterns they have not encountered before; in such situations, ML algorithms operate in a divergent or extrapolatory mode in which false results are possible or even likely. Given this inherent limitation, it is unlikely that AI-based tools will completely replace physicians in the near future. Instead, a number of recent studies and reports in the general media (e.g., the Harvard Business Review) suggest that AI tools will see their greatest application, and provide the greatest benefit, by directly assisting providers. If medical specialties are willing to embrace this approach, repetitive tasks currently performed by physicians, and by pathologists in particular, may be well suited to AI-based assistance [1].

In fact, the evidence collected to date shows that a competent medical generalist working with a properly developed AI tool can outperform a medical specialist who does not use such a tool. This is consistent with the results of a survey of 487 pathologists from 54 countries: seventy-one percent of respondents thought that AI tools could improve their diagnostic efficiency, even though the majority believed that diagnostic decision-making should remain primarily a human task. At the same time, implementing machine-based assistance in clinical settings requires caution, because pathologists' diagnostic decisions can be influenced by AI, introducing novel sources of bias. Only a small number of AI-based prediction tools have reached the market and proven useful in real-world clinical settings. While some studies emphasize the ability of these narrow AI algorithms to excel at a particular task, most ignore the greater complexity of clinical practice, where such tools would be exposed to a much wider variety of heterogeneous cases and data patterns and would therefore likely fail or exhibit lower accuracy than initially reported. The subpar implementation of IBM Watson (IBM Watson Health, New York, NY) is a well-known illustration of this phenomenon [2].

Developing a machine-learning or deep-learning model takes a long time, typically weeks or months. The process starts with identification of a problem for which AI could be useful, followed by data collection, data transformation, and, finally, model training. If these steps succeed, rigorous validation studies are carried out before a reliable algorithm is implemented in a clinical setting, whether a clinical laboratory or otherwise. Before commercial AI products can be legally marketed for clinical use, marketing authorization is also required, such as Conformité Européenne marking or approval or clearance from the US Food and Drug Administration (FDA). The ultimate test is whether a product can be successfully incorporated into the workflow of pathologists. The incorporation of FDA-approved computer-assisted automated Papanicolaou-smear screening into cytopathology was an early example in this regard. Unfortunately, similar AI tools intended for use elsewhere in AP must still overcome a number of obstacles before they can be implemented. This review discusses the obstacles in AI development, deployment, and regulation that stand in the way of widespread adoption in AP [3].
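As a rough, hypothetical sketch of the workflow just outlined (problem identification, data collection, data transformation, model training, and validation), the Python example below uses synthetic features and scikit-learn in place of whole-slide images and a deep network; it illustrates the sequence of steps, not a clinical-grade pipeline.

```python
# Minimal sketch of the development workflow described above: data collection,
# data transformation, model training, and validation on a held-out set.
# Synthetic tabular features stand in for image-derived data; a real pathology
# model would typically use whole-slide image tiles and a deep network.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# "Data collection": synthetic stand-in for features extracted from slides.
X, y = make_classification(n_samples=1000, n_features=50, n_informative=10,
                           n_classes=2, random_state=0)

# Hold out a validation set that the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# "Data transformation": fit the scaling on training data only.
scaler = StandardScaler().fit(X_train)
X_train, X_val = scaler.transform(X_train), scaler.transform(X_val)

# "Model training".
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# "Validation": performance on data the model has not encountered before.
print(classification_report(y_val, model.predict(X_val)))
```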

For AI-based algorithms to be successfully adopted and implemented in practice, pathologists must have buy-in. These applications must fill gaps or unmet needs, be of clinical and practical use, and avoid interfering with the clinical workflow. Examples include mitosis detection, rare-event identification, tumour-percentage calculation, and other tasks that are repetitive, tedious, or prone to high interobserver variability. A notable illustration is Ki-67 index scoring of neoplasms, which involves counting hundreds or thousands of tumour cells on tissue sections, a repetitive process well suited to automated computation. Some pathologists instead resort to an improvised workaround, estimating the Ki-67 index by light microscopy through a process known as eyeballing, at the expense of accuracy and reproducibility. This example clearly demonstrates a situation in which AI tools can be developed with the input of pathologists to improve both diagnostic quality and pathologists' efficiency. However, AI tools developed solely for novelty or intellectual appeal are unlikely to be used in routine clinical practice. AI start-ups therefore need to avoid the trap of searching for the elusive killer application and be wary of shiny-object syndrome. Instead, these businesses should concentrate on tools that are essential to the work of pathologists, recognizing that some of the low-hanging fruit may be relatively mundane but nonetheless important tasks [4].
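Because automated Ki-67 scoring ultimately reduces to counting Ki-67-positive tumour cells among all tumour cells in an evaluated region, a minimal sketch of the computation is shown below; the detection fields and counts are hypothetical and are not taken from any specific tool.

```python
# Hypothetical sketch of automated Ki-67 index computation from per-cell
# detections produced by an upstream classifier. Field names and counts are
# illustrative only.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class CellDetection:
    is_tumor: bool        # detector labels the cell as a tumour cell
    ki67_positive: bool   # nuclear Ki-67 staining above threshold

def ki67_index(cells: Iterable[CellDetection]) -> float:
    """Percentage of tumour cells that are Ki-67 positive."""
    tumor = [c for c in cells if c.is_tumor]
    if not tumor:
        raise ValueError("No tumour cells detected in this region.")
    positive = sum(c.ki67_positive for c in tumor)
    return 100.0 * positive / len(tumor)

# Example region: 2,000 detected tumour cells, 340 Ki-67 positive -> 17.0%.
cells = ([CellDetection(True, True)] * 340
         + [CellDetection(True, False)] * 1660
         + [CellDetection(False, False)] * 500)  # non-tumour cells are ignored
print(f"Ki-67 index: {ki67_index(cells):.1f}%")
```

In other words, the index is simply 100 × (Ki-67-positive tumour cells) / (total tumour cells) for the region scored; the difficult parts in practice are cell detection, tumour-versus-non-tumour classification, and choosing the region to evaluate, which is exactly where pathologist input is needed.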

Choosing a data set for AI algorithm development does not follow a standard procedure and can be challenging in its own right. To achieve high model accuracy, significant performance gains, and increased generalizability, convolutional neural networks typically require large-scale training sets composed of hundreds or thousands of slides. In contrast, in transfer-learning settings, small data sets of fewer than 100 digital slides may be sufficient. Because only a very small number of slides may be available for rare diseases, some groups use data-augmentation techniques to simulate larger-scale data sets. As a result, the actual number of slides required for an AI task varies from problem to problem [5].
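To make the transfer-learning and data-augmentation ideas concrete, the hedged sketch below reuses a CNN pretrained on natural images and retrains only its final layer on a small, hypothetical set of slide tiles; the tile directory, class count, and augmentations are assumptions made for illustration rather than a recommended recipe.

```python
# Hedged sketch of transfer learning on a small slide-tile data set: a CNN
# pretrained on natural images is reused and only the final layer is retrained.
# The tile directory, class count, and augmentations are assumptions made for
# illustration only.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 2  # e.g., tumour vs. non-tumour tiles (assumed)

# Data augmentation: random flips/rotations simulate a larger data set.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(90),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder of image tiles organized into one subfolder per class.
train_ds = datasets.ImageFolder("tiles/train", transform=train_tf)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Pretrained backbone: freeze the feature extractor, retrain the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass, for illustration only
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```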

Conclusion

As more categories are added to an AI classification task, the number of slides and images required for algorithm training naturally increases (for instance, classifying two types of cancer requires fewer training samples than building a model to classify five types of cancer). Locally curated image repositories can be supplemented with publicly accessible data sets; however, because of budgetary constraints, copyright issues, and confidentiality concerns, pathology has relatively few such data sets. The Cancer Genome Atlas is one publicly accessible data set that provides digital slides and molecular metadata for many cases, but because it contains only a small number of cases across many diagnostic subsets, it is difficult to train clinical-grade histopathology AI models from it alone. Public challenges are another useful, albeit limited, source of data sets for deep-learning algorithms.

Acknowledgement

None.

Conflict of Interest

None.

References

  1. Jones BA and Novis DA. Follow-up of abnormal gynecologic cytology: a College of American Pathologists Q-Probes study of 16,132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124(5):665-671.

  2. Bruner JM, Inouye L, Fuller GN et al. Diagnostic discrepancies and their clinical impact in a neuropathology referral practice. Cancer. 1997;79(4):796-803.

  3. Zardawi IM, Bennett G, Jain S et al. Internal quality assurance activities of a surgical pathology department in an Australian teaching hospital. J Clin Pathol. 1998;51(9):695-699.

  4. Safrin RE and Bark CJ. Surgical pathology sign-out: Routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17(11):1190-1192.

  5. Ramsay AD and Gallagher PJ. Local audit of surgical pathology: 18 months' experience of peer review-based quality assessment in an English teaching hospital. Am J Surg Pathol. 1992;16(5):476-482.