Simplifying N:C Ratio Analysis in Cytology: An Easy Approach Using Image Analysis Software

25 Apr, 2023 | IKOSA AI, Use case

Histological assessment is currently considered the gold standard for determining whether cells and tissues are malignant. However, it remains a time-consuming method, and its results can depend strongly on differing interpretations between pathologists. Certain characteristics have been identified as prevalent markers of malignancy; one example is an enlarged nucleus. This observation led to the development of the so-called nucleus-to-cytoplasm ratio (N:C ratio), which can be defined as the ratio of nuclear area to cytoplasmic area and is especially relevant in cervical and urogenital cytology (Sebastian et al., 2021). Even though this measure proves practical in many tissue types, some types show low N:C ratios despite malignancy and vice versa. Immature cells also typically show a shifted N:C ratio (Sung et al., 2014).

This calls for an automated assessment method to support researchers and healthcare professionals in cytology. 

New techniques are emerging to quantify the N:C ratio more objectively using pre-trained AI models and computer vision technology. The steady advances in AI are transforming the dynamics of research and diagnostics alike.

Utilizing the advanced deep learning technology available in the IKOSA AI software, we analyzed the N:C ratios in Pap smear samples. In this article, our author Benjamin presents a straightforward image analysis method based on instance segmentation and highlights the advantages of this software in cytology. He also shares valuable insights into the importance of proper training and of crucial factors such as precise annotation and data validity for achieving reliable and consistent outcomes.

Did you know?

Assessed visually, the N:C ratio cannot serve as a grading method on its own due to its highly subjective nature and lack of quantification. Despite its widespread use, little is known about the reproducibility and accuracy of the results of a non-automated assessment.

Materials and Methods

For this showcase of N:C ratio analysis in cervical cytology, I used Pap smear samples. All samples were obtained using standard procedures and were prepared and stained with the Papanicolaou technique. The images were captured at 20x magnification using a slide scanner.

I annotated the data by marking regions of interest (ROIs) on representative images and labeling the cells and their nuclei.

ROI with the annotation of the target objects, including cells and their nuclei.

To start the application training process, I selected instance segmentation as the training type.

As the entire dataset is quite homogeneous, with no major variations in cell morphology or staining, I decided to use a rather small number of images for the application training. The application was therefore trained using a manual data split of four images, meaning that I selected the training and validation images myself.

The manual data split included three images to train the application and one image for the validation of the trained model.

To improve the background recognition of the trained model, I also provided an image with ROIs representing only the background. The total training time was about 3 hours using the extended training option.

Important!

Note that the training time depends on the amount and complexity of the data and the number of validation images.

An ROI representing only the background allows a more reliable distinction between the background and the features of interest.

App training results

The IKOSA software provided me with a detailed Application Training Report on the performance, based on the validation image I selected for the application training.

Please keep in mind: the closer a trained application's validation metrics come to the ideal values listed below, the better its performance (a sketch of how such metrics are computed follows the list):

  • IoU (Intersection over Union): the closer to 1, the better
  • Precision [%]: the closer to 100%, the better
  • Recall [%]: the closer to 100%, the better
  • Average Precision: the closer to 1, the better
  • False Positives: the closer to 0, the better
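
If you are curious how these numbers come about, here is a minimal Python sketch (plain NumPy, independent of the IKOSA software) of how IoU, precision, and recall are commonly computed for segmentation results:

    import numpy as np

    def iou(pred, truth):
        """Intersection over Union of two boolean segmentation masks."""
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return intersection / union if union > 0 else 0.0

    def precision(tp, fp):
        """Fraction of detected objects that match a ground-truth object."""
        return tp / (tp + fp) if tp + fp > 0 else 0.0

    def recall(tp, fn):
        """Fraction of ground-truth objects that were detected."""
        return tp / (tp + fn) if tp + fn > 0 else 0.0

    # Toy example: two partially overlapping 40 x 40 pixel squares
    truth = np.zeros((100, 100), dtype=bool)
    truth[20:60, 20:60] = True
    pred = np.zeros((100, 100), dtype=bool)
    pred[30:70, 30:70] = True

    print(f"IoU: {iou(pred, truth):.2f}")        # IoU: 0.39
    print(f"Precision: {precision(18, 2):.0%}")  # Precision: 90%
    print(f"Recall: {recall(18, 4):.0%}")        # Recall: 82%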

The overall training performance of my application, including all ROIs for both nuclei detection and cell detection, is shown in the images below.

If you want to learn more about validation parameters such as Precision or Recall, please have a look at our Knowledge Base.

Overall training performance for nuclei detection.
Overall training performance for cell detection.

In addition to the overall training information, the training report includes the classification of true-positive, false-positive, and false-negative detected instances. These counts are calculated against the ground-truth labels, i.e. the annotations the user made on the image selected for training validation. The deep learning model is trained to detect morphological structures that resemble these ground-truth annotations.
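
The exact matching rule the software uses is not spelled out in the report, but a common way to obtain such counts is to match each ground-truth object to its best-overlapping prediction and apply an IoU threshold. The following sketch illustrates the idea, reusing the iou function from the snippet above; the threshold of 0.5 is my assumption, not a documented IKOSA setting:

    def classify_instances(pred_masks, truth_masks, iou_threshold=0.5):
        """Greedily match predicted instance masks to ground-truth masks.

        Matched pairs above the IoU threshold count as true positives,
        leftover predictions as false positives, and unmatched
        ground-truth objects as false negatives.
        """
        unmatched = list(range(len(pred_masks)))
        tp = fn = 0
        for truth in truth_masks:
            best = max(unmatched, key=lambda i: iou(pred_masks[i], truth), default=None)
            if best is not None and iou(pred_masks[best], truth) >= iou_threshold:
                tp += 1
                unmatched.remove(best)
            else:
                fn += 1
        fp = len(unmatched)  # predictions that matched nothing
        return tp, fp, fn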

Classifications of instances for nuclei detection.
Classifications of instances for cell detection.

Analysis Results

Once the app training is complete and complies with your quality requirements, you can deploy the app and run the analysis on your images. 

Here is an example of an ROI analysis with the app I trained earlier:

Nuclei detection.
Cell detection.

The instance segmentation app provides a CSV file with comprehensive information on the analyzed objects, such as Object ID, Number of Objects, ROI area, and Object area.

I calculated the N:C ratio from the object areas of the detected cells and their corresponding nuclei. Since both values are available in the analysis file, the calculation is straightforward: subtract the nuclear area from the total cell area to obtain the cytoplasmic area, then divide the nuclear area by the cytoplasmic area. For example, a cell with a total area of 500 µm² and a nuclear area of 100 µm² has a cytoplasmic area of 400 µm² and an N:C ratio of 0.25.
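
As a minimal sketch of this post-processing step with pandas (the file and column names are illustrative assumptions, so check the headers of your own export; if the export does not link each nucleus to its parent cell, the two object classes need to be matched spatially first, for example by centroid containment):

    import pandas as pd

    # One row per detected object; column names are assumed for illustration.
    df = pd.read_csv("analysis_results.csv")

    cells = df[df["object_class"] == "cell"].set_index("object_id")
    nuclei = df[df["object_class"] == "nucleus"].set_index("parent_cell_id")

    # Cytoplasmic area = total cell area minus the area of the enclosed nucleus
    cell_area = cells["object_area"]
    nucleus_area = nuclei["object_area"]
    cytoplasm_area = cell_area - nucleus_area

    nc_ratio = nucleus_area / cytoplasm_area
    print(nc_ratio.describe())  # summary of N:C ratios across all detected cells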

The N:C ratio is calculated by dividing the size of the nucleus by the size of the cytoplasm in a given cell.

How to improve training and analysis results

The app I trained for this showcase performs quite well overall. But as you can certainly imagine, it took me some time to get there. That’s why I would like to share a few valuable tips with you rather than deprive you of the process. So let’s go!

To improve the accuracy and reproducibility of your analysis, several factors must be considered carefully. Besides proper sample preparation and consistent imaging conditions, you can influence the outcome of your analysis by following some general rules:

  • It is important to ensure the availability of an adequate amount of high-quality data for the application training. This includes images that accurately represent the range of features present in the target samples.
  • Precise annotation of the data is crucial to ensure accurate identification and segmentation of relevant features such as cell bodies and nuclei.
  • The training process should be conducted over a sufficient period with ample amounts of data to allow for the development of a robust model.
  • Those performing the analysis should be well-trained and experienced in using the software and interpreting the results.

After selecting representative images and spending time annotating some objects, I wanted to test how well the analysis worked with the information I had provided to train the app. However, the initial results were modest, to say the least. As you can see in the image below, the segmentation was not satisfying, and some cells were fragmented.

Initial, unsatisfying segmentation results with fragmented cells (superficial ROI).

I discovered an ROI in one of the images that lacked annotated objects, which confused the model during training. To address this issue, I annotated all the objects in that ROI and retrained the application. The new training run led to significantly improved segmentation results, as demonstrated in the image below.

Improved segmentation results after retraining (superficial ROI).

To further improve the segmentation, I added more ROIs and annotations and initiated another round of application training. As the next image shows, with the additional input the model segmented even slightly overlapping objects successfully.

Additional annotations can help to improve your segmentation results.

Does this now mean that the app is flawless and capable of accurately segmenting each and every object? The answer is no. However, I did not set out to create a perfect app for this showcase. In my opinion, achieving 100% perfect outcomes in image analysis should not be the goal and is often unrealistic from today’s perspective. The main objective should instead be to continuously improve the results through iterative testing and refinement.

Conclusion

I know the N:C ratio measurement can be performed manually, but it is often more efficient and accurate to use image analysis software such as IKOSA AI to automatically detect the cells and calculate their areas. As you have seen, training the software to recognize and segment the nuclei and cytoplasm in an image is a process. But once established, it saves valuable time, reduces human error, and can be applied to your entire dataset.

As a life sciences professional, I understand the importance of reliable and accurate analysis of cellular samples, and I know that N:C ratio measurement is a widely used method that provides valuable information for diagnosis and prognosis.

With the help of advanced image analysis software, cell detection and N:C ratio analysis can be performed with high accuracy and reproducibility. However, proper training, experience, and attention to important factors such as training time, data validity, and precise annotation are critical to achieving reliable and consistent results. As with any analysis method, there are limitations. But with careful consideration and a systematic approach, automated analysis can greatly aid the interpretation of cytological samples and ultimately improve your research outcomes.

In this showcase, the trained application provided me with reliable and reproducible results, with consistent measurements obtained across all images analyzed. With these considerations in mind, researchers and technicians can confidently incorporate automated N:C ratio analysis into their work.

If you have any questions about the use case, the application training process, or anything else, we’re glad to assist you!

Our authors:


Benjamin Obexer

Lead content writer, life science professional, and passionate about technology in healthcare.


Elisa Opriessnig

Content writer focused on the technological advancements in healthcare such as digital health literacy and telemedicine.

References

Sebastian, J. A., Moore, M. J., Berndl, E. S. L., & Kolios, M. C. (2021). An image-based flow cytometric approach to the assessment of the nucleus-to-cytoplasm ratio. PloS one, 16(6), e0253439. https://doi.org/10.1371/journal.pone.0253439

Sung, W. W., Lin, Y. M., Wu, P. R., Yen, H. H., Lai, H. W., Su, T. C., Huang, R. H., Wen, C. K., Chen, C. Y., Chen, C. J., & Yeh, K. T. (2014). High nuclear/cytoplasmic ratio of Cdk1 expression predicts poor prognosis in colorectal cancer patients. BMC cancer, 14, 951. https://doi.org/10.1186/1471-2407-14-951
