Archives of Medicine

Commentary - (2022) Volume 14, Issue 9

Obstacles to the development, use and regulation of artificial intelligence in anatomic pathology

Sofie Lusingu*
 
Department of Microbiology, School of Mathematical and Natural Sciences, University of Venda, Thohoyandou, South Africa
 
*Correspondence: Sofie Lusingu, Department of Microbiology, School of Mathematical and Natural Sciences, University of Venda, Thohoyandou, South Africa, Email:

Received: 03-Sep-2022, Manuscript No. ipaom-22-13253; Editor assigned: 05-Sep-2022, Pre QC No. P-13253; Reviewed: 19-Sep-2022, QC No. Q-13253; Revised: 24-Sep-2022, Manuscript No. R-13253; Published: 30-Sep-2022

Description

The potential of machine learning (ML) and artificial intelligence (AI)-based systems to replace or supplant physicians in numerous medical specialties, including anatomic pathology (AP), has recently received considerable attention, and several recent studies and reports in the general media have claimed that ML models can surpass human performance in various scenarios. Nevertheless, it is unlikely that AI-based tools will completely replace physicians in the near future, because AI-based solutions frequently exhibit unpredictable and/or incorrect results when confronted with data or patterns they have not encountered before. In such circumstances, ML algorithms operate in a divergent or extrapolatory mode, in which spurious results are possible or even likely. Because of this inherent limitation, AI tools will see their greatest application, and provide the greatest benefit, as assistive technology for clinicians rather than as replacements. Repetitive tasks currently performed by physicians, specifically pathologists, may be particularly suitable for AI-based assistance if medical specialties are willing to embrace this new approach [1].
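
To make the notion of an extrapolatory mode concrete, the following minimal Python sketch (a toy illustration, not taken from this article; the synthetic data and the choice of a scikit-learn random forest are assumptions) shows how a model fitted on a narrow range of inputs returns confident but spurious values when queried outside that range.

    # Toy illustration of out-of-distribution behaviour: the model is
    # fitted only on inputs in [0, 5] and then queried at x = 9.0.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 5, size=(200, 1))   # training inputs cover [0, 5] only
    y_train = np.sin(X_train).ravel()

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    X_query = np.array([[2.5], [9.0]])           # in-range vs. out-of-range query
    print(model.predict(X_query), "true values:", np.sin(X_query).ravel())
    # The out-of-range prediction simply echoes values seen near the edge of the
    # training data; the true value of sin(9.0) is not recovered.

Clinically deployed tools face the same issue whenever incoming cases fall outside the distribution represented in their training sets.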

In fact, the evidence collected to date shows that a competent medical generalist working with a properly developed AI tool can outperform a medical specialist who does not use an AI-based solution. This is consistent with the results of a survey of 487 pathologists from 54 countries: seventy-one percent of respondents thought that AI tools could improve their diagnostic efficiency, even though the majority believed that diagnostic decision-making should remain primarily a human task. At the same time, pathologists' diagnostic decisions can be influenced by AI in ways that introduce novel sources of bias, so machine-based assistance must be implemented in clinical settings with caution. Only a small number of AI-based prediction tools have made it onto the market and proven useful in real-world clinical settings. While some studies emphasize the ability of these narrow AI algorithms to excel at a particular task, the majority ignore the greater complexity of clinical practice. Exposed to a much wider variety of heterogeneous cases and data patterns in routine use, such tools would likely fail or exhibit lower accuracy than initially reported. The troubled implementation of IBM Watson (IBM Watson Health, New York, NY) is a well-known illustration of this phenomenon [2].

The process of developing a machine-learning or deep-learning model takes a long time, typically weeks or months. It starts with the identification of a problem for which AI could be useful, followed by data collection, data transformation and, ultimately, model training. If this procedure is successful, rigorous validation studies are carried out before a reliable algorithm is implemented in a clinical setting, whether in a clinical laboratory or elsewhere. Before commercial AI products can be legally marketed for clinical use, marketing approval is also required, such as approval or clearance under the Conformité Européenne marking or from the US Food and Drug Administration (FDA). The ultimate test is whether a product can be successfully incorporated into the workflow of pathologists. The incorporation of FDA-approved computer-assisted automated Papanicolaou-smear screening into cytopathology was an early example in this regard. Unfortunately, similar AI tools intended for use elsewhere in AP must still overcome a number of obstacles before they can be implemented. This review discusses the obstacles in AI development, deployment, and regulation that stand in the way of widespread adoption in AP [3].
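
As a rough, non-authoritative sketch of the development sequence described above (problem definition, data collection, data transformation, model training, and validation), the following Python snippet uses a generic scikit-learn classifier and a bundled public data set as stand-ins for a real slide-based pathology model; every name and data choice here is illustrative only.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    # "Data collection": a bundled public data set stands in for curated slides.
    X, y = load_breast_cancer(return_X_y=True)

    # Hold out cases for the validation step that precedes any deployment.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    # "Data transformation" and "model training" chained into one pipeline.
    clf = Pipeline([
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    clf.fit(X_train, y_train)

    # Internal validation on held-out cases; rigorous external validation,
    # regulatory review, and workflow integration would follow from here.
    print("validation AUC:", roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1]))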

For AI-based algorithms to be successfully adopted and implemented in practice, there must be buy-in from pathologists. These applications must fill gaps or unmet needs without interfering with the clinical workflow, and they must be of clinical and practical use. Examples include mitosis detection, rare-event identification, tumour-percentage calculation, and other tasks that have been found to be repetitive, tedious, or prone to high interobserver variability. A notable illustration is the Ki-67 index scoring of neoplasms, which involves counting hundreds or thousands of tumour cells on tissue sections, a repetitive process that is well suited to automated computation. In the absence of such tools, some pathologists resort to an improvised workaround, estimating the Ki-67 index by light microscopy through a process known as eyeballing, at the expense of accuracy and reproducibility. This clearly demonstrates a situation in which AI tools can be developed with the input of pathologists to improve both diagnostic quality and pathologists' efficiency. However, it is unlikely that AI tools will be used in routine clinical practice if they are developed solely for novelty or intellectual appeal. AI startups therefore need to avoid the trap of searching for the elusive killer application and be wary of shiny-object syndrome. Instead, they should concentrate on tools that are essential to the work of pathologists, keeping in mind that some of the low-hanging fruit in this regard may be relatively mundane but nonetheless important tasks [4].
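
As an illustration of why Ki-67 scoring lends itself to automated computation, the sketch below computes the index from per-cell detections; the input format and the Cell class are hypothetical, standing in for the output of an upstream cell-detection model rather than any particular product.

    from dataclasses import dataclass

    @dataclass
    class Cell:                      # hypothetical output record of a cell detector
        is_tumour: bool
        ki67_positive: bool

    def ki67_index(cells: list[Cell]) -> float:
        """Percentage of Ki-67-positive cells among all detected tumour cells."""
        tumour = [c for c in cells if c.is_tumour]
        if not tumour:
            return 0.0
        positive = sum(c.ki67_positive for c in tumour)
        return 100.0 * positive / len(tumour)

    # Toy example: 2 of 3 tumour cells are positive; non-tumour cells are ignored.
    cells = [Cell(True, True), Cell(True, False), Cell(True, True), Cell(False, True)]
    print(f"Ki-67 index: {ki67_index(cells):.1f}%")   # -> 66.7%

Counting every detected cell this way removes the sampling and estimation error inherent in eyeballing, provided the upstream detections are reliable.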

Choosing a data set for AI algorithm development does not follow a standard procedure and can be challenging in its own right. To achieve high model accuracy, significant performance gains, and good generalizability, for instance, convolutional neural networks typically require large-scale training sets made up of hundreds or thousands of slides. In contrast, in situations involving transfer learning, small data sets of fewer than 100 digital slides may be sufficient. Because only a very small number of slides may be available for rare diseases, some groups use data-augmentation techniques to simulate larger-scale data sets. As a result, the actual number of slides required for an AI task varies from problem to problem [5].
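
As a minimal sketch of the data-augmentation idea mentioned above, the snippet below generates the eight flip/rotation variants of an image patch, a common choice for histology tiles because orientation carries no diagnostic meaning; the patch shape and the NumPy array representation are assumptions for illustration.

    import numpy as np

    def augment_patch(patch: np.ndarray) -> list[np.ndarray]:
        """Return the eight rotation/mirror variants of an (H, W, 3) patch."""
        variants = []
        for k in range(4):                        # 0, 90, 180, 270 degree rotations
            rotated = np.rot90(patch, k)
            variants.append(rotated)
            variants.append(np.fliplr(rotated))   # mirrored copy of each rotation
        return variants

    patch = np.zeros((64, 64, 3), dtype=np.uint8)  # placeholder RGB tile
    print(len(augment_patch(patch)))               # -> 8 variants per patch

A rare-disease set of 50 patches would yield 400 training examples this way, although augmented copies are not a substitute for genuinely independent cases.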

The number of slides and images required for algorithm training naturally increases as more categories are added to an AI-classification task (for example, classifying two types of cancer requires fewer training samples than building a model to classify five types of cancer). Locally curated image repositories can benefit from the addition of publicly accessible data sets; however, few such data sets exist in pathology because of budgetary constraints, copyright, and confidentiality concerns. One publicly accessible data set that provides digital slides and molecular metadata for a significant number of cases is The Cancer Genome Atlas (TCGA). Unfortunately, TCGA contains only a small number of cases for many diagnostic subsets, making it difficult to train clinical-grade histopathology AI models from it alone. Public challenges for the creation of deep-learning algorithms are yet another useful, albeit limited, source of data sets.

Acknowledgement

None.

Conflict of Interest

None.

REFERENCES

  1. Bruner JM, Inouye L, Fuller GN, et al. Diagnostic discrepancies and their clinical impact in a neuropathology referral practice. Cancer. 1997;79(4):796-803.
  2. Zardawi IM, Bennett G, Jain S, et al. Internal quality assurance activities of a surgical pathology department in an Australian teaching hospital. J Clin Pathol. 1998;51(9):695-699.
  3. Safrin RE, Bark CJ. Surgical pathology signout: Routine review of every case by a second pathologist. Am J Surg Pathol. 1993;17(11):1190-1192.
  4. Ramsay AD, Gallagher PJ. Local audit of surgical pathology: 18 months' experience of peer review-based quality assessment in an English teaching hospital. Am J Surg Pathol. 1992;16(5):476-482.
  5. Jones BA, Novis DA. Follow-up of abnormal gynecologic cytology: A College of American Pathologists Q-Probes study of 16 132 cases from 306 laboratories. Arch Pathol Lab Med. 2000;124(5):665-671.