The Algorithmic Doctor: Bridging the Transparency Gap in AI-Driven Healthcare

Review Article | DOI: https://doi.org/10.31579/2767-7370/146

  • Paraschos Maniatis

Athens University of Economics and Business, Patision 76, GR-15772 Athens, Greece.

*Corresponding Author: Paraschos Maniatis, Athens University of Economics and Business, Patision 76, GR-15772 Athens, Greece.

Citation: Paraschos Maniatis, (2025), The Algorithmic Doctor: Bridging the Transparency Gap in AI-Driven Healthcare, J New Medical Innovations and Research, 6(4); DOI:10.31579/2767-7370/146

Copyright: © 2025, Paraschos Maniatis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: 07 March 2025 | Accepted: 28 March 2025 | Published: 03 April 2025

Keywords: artificial intelligence (AI); healthcare; transparency; explainable AI (XAI); interpretability; algorithmic bias; trust; patient safety; ethical AI; medical decision-making

Abstract

Artificial intelligence (AI) is rapidly transforming healthcare, offering the potential to improve diagnostic accuracy, personalize treatment plans, and optimize resource allocation. However, the increasing complexity and opacity of AI algorithms, particularly in deep learning models, pose a significant challenge to trust, accountability, and ultimately, patient safety. This research investigates the transparency gap in AI-driven healthcare, exploring the perceptions of healthcare professionals and patients regarding the explainability and interpretability of AI-based diagnostic and treatment recommendations. Through surveys, we examine the factors contributing to the transparency gap, the impact on trust and adoption, and potential strategies for bridging this gap through explainable AI (XAI) techniques and improved communication. The findings highlight the urgent need for enhanced transparency in AI-driven healthcare to ensure responsible and ethical deployment of these powerful technologies.

Introduction

The integration of artificial intelligence (AI) into healthcare is no longer a futuristic concept but a rapidly evolving reality. AI algorithms are being deployed across a wide spectrum of applications, from analyzing medical images and predicting disease outbreaks to assisting in surgical procedures and personalizing drug therapies. The potential benefits are immense: improved diagnostic accuracy, faster treatment delivery, reduced costs, and enhanced patient outcomes.

However, the rise of AI in healthcare is not without its challenges. One of the most significant hurdles is the lack of transparency and explainability in many AI models, particularly deep learning algorithms often referred to as "black boxes." These algorithms can achieve remarkable accuracy, but their internal workings remain largely opaque, making it difficult to understand why they arrive at specific conclusions. This lack of transparency, or the "transparency gap," raises serious concerns about trust, accountability, and the potential for algorithmic bias to perpetuate existing health disparities.

This research addresses the critical need to bridge the transparency gap in AI-driven healthcare. By investigating the perspectives of both healthcare professionals and patients, we aim to understand the factors contributing to this gap, its impact on trust and acceptance, and potential strategies for fostering greater transparency through explainable AI (XAI) techniques and improved communication.

Research Objectives

This research aims to achieve the following objectives:

  • Objective 1: To assess the current level of understanding and awareness of AI applications in healthcare among healthcare professionals (doctors, nurses, and other allied health staff) and patients.
  • Objective 2: To identify the key factors contributing to the transparency gap in AI-driven healthcare, focusing on the technical limitations of AI algorithms, the complexity of medical data, and the lack of standardized reporting practices.
  • Objective 3: To examine the impact of the transparency gap on trust in AI-based diagnostic and treatment recommendations among healthcare professionals and patients.
  • Objective 4: To evaluate the effectiveness of different XAI techniques in enhancing the interpretability and explainability of AI models used in healthcare.
  • Objective 5: To develop recommendations for bridging the transparency gap through improved communication strategies, standardized reporting practices, and the ethical design and deployment of AI algorithms in healthcare.

Literature Review

1. AI in Healthcare: Benefits and Challenges

Artificial intelligence (AI) has revolutionized healthcare by enhancing diagnostic precision, personalizing treatment plans, and optimizing medical workflow efficiency (Topol, 2019). AI-powered tools such as deep learning algorithms have demonstrated remarkable success in radiology, pathology, and predictive analytics (Esteva et al., 2017). However, despite these benefits, challenges remain, including data privacy, algorithmic bias, and the lack of standardized frameworks for validation and regulatory approval (Yu et al., 2018).

2. The Transparency Gap in AI-Driven Healthcare

One of the major concerns with AI applications in medicine is the lack of transparency, particularly in deep learning models, often regarded as "black boxes" (Rudin, 2019). The complexity of these models makes it difficult to interpret their decision-making processes, creating skepticism among healthcare professionals and patients (Ghassemi et al., 2020). This opacity can lead to resistance in clinical adoption and increased liability concerns (London, 2019). Furthermore, algorithmic biases, often stemming from unrepresentative training data, exacerbate disparities in healthcare outcomes (Obermeyer et al., 2019).

3. Explainable AI (XAI) in Healthcare

Explainable AI (XAI) aims to improve model interpretability by offering insights into how AI-driven decisions are made. Several XAI techniques, such as Local Interpretable Model-Agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and attention mechanisms, have been proposed to enhance transparency (Adadi & Berrada, 2018). Studies have shown that incorporating XAI methods can increase clinicians' trust in AI-driven diagnostics and treatment recommendations (Holzinger et al., 2017). However, the effectiveness of these techniques varies based on the complexity of the medical condition and the interpretability of the model’s outputs (Tjoa & Guan, 2020).
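LIME and SHAP are both model-agnostic: they probe a trained model by perturbing its inputs and observing how the prediction changes. The Python sketch below illustrates that core idea in a much cruder form (it is not LIME or SHAP themselves); the toy risk model, its weights, and the feature roles are hypothetical stand-ins, not a clinical model.

```python
import numpy as np

# A toy "risk model": a logistic score over three clinical features.
# Hypothetical weights for age, treatment, and a biomarker.
weights = np.array([0.8, -0.5, 0.3])

def model(x):
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

# A simple perturbation-based explanation (in the spirit of LIME/SHAP,
# but much cruder): how much does the prediction move when one feature
# is replaced by its population baseline?
def feature_attributions(x, baseline):
    base_pred = model(x)
    attributions = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]          # "remove" feature i
        attributions.append(base_pred - model(x_pert))
    return np.array(attributions)

patient = np.array([1.2, 0.4, -0.3])
population_mean = np.zeros(3)
print(feature_attributions(patient, population_mean))
```

A positive attribution means the feature pushed the predicted risk upward for this patient, which is the kind of per-case reading LIME and SHAP make rigorous.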

4. Trust in AI: Factors Influencing Adoption

Trust plays a pivotal role in AI adoption within healthcare. Research suggests that transparency, reliability, and fairness significantly impact clinicians' and patients' willingness to rely on AI-based systems (Lee & See, 2004). Additionally, a lack of standardized communication regarding AI decision-making processes can further contribute to mistrust (Caruana et al., 2015). Studies indicate that even when AI demonstrates superior performance compared to human counterparts, low interpretability can hinder its acceptance (Shortliffe & Sepúlveda, 2018).

5. Ethical Considerations and Algorithmic Bias

AI-driven healthcare systems must address ethical concerns such as data privacy, patient autonomy, and bias mitigation (Floridi et al., 2018). Algorithmic bias remains a significant challenge, as biased training datasets can lead to disparities in medical recommendations across different demographic groups (Mehrabi et al., 2021). For instance, a study by Obermeyer et al. (2019) highlighted that an AI model used for predicting healthcare needs systematically underestimated the health risks of Black patients due to biased training data. Addressing these issues requires more rigorous fairness-aware AI models and ethical oversight (Leslie, 2019).

6. Communicating AI Decisions to Non-Technical Audiences

Effective communication of AI-generated medical insights is crucial for both clinicians and patients. Studies suggest that user-friendly visualizations, simplified explanations, and standardized reporting formats can enhance comprehension and acceptance of AI recommendations (Lipton, 2018). Furthermore, integrating AI explanations within clinical decision support systems can facilitate informed decision-making and reduce clinician cognitive load (Rajkomar et al., 2019).

Methodology

This research will employ quantitative data collection and analysis techniques to provide a comprehensive understanding of the transparency gap in AI-driven healthcare.

  • Phase 1: Quantitative Survey: A structured survey will be administered to a sample of healthcare professionals (doctors, nurses, and allied health staff) and patients. The survey will assess their understanding and perceptions of AI applications in healthcare, their level of trust in AI-based diagnostic and treatment recommendations, and their concerns regarding the lack of transparency in AI algorithms. The survey will use Likert scales (e.g., strongly agree to strongly disagree) to measure attitudes and perceptions. Demographic information will also be collected.

Data Analysis:

  • Quantitative Data: Survey data will be analyzed using descriptive statistics (means, standard deviations, frequencies) and inferential statistics (t-tests, ANOVA, correlation analysis) to identify significant relationships between variables. Statistical software such as SPSS or R will be used for data analysis.
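As a hypothetical illustration of this analysis plan, the sketch below runs the descriptive statistics and an independent-samples t-test in Python with SciPy (an alternative to SPSS or R); the Likert responses are synthetic, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-5 Likert trust responses for two groups.
doctors = rng.integers(1, 6, size=100)
patients = rng.integers(1, 6, size=100)

# Descriptive statistics: mean and standard deviation per group.
print(f"doctors:  mean={doctors.mean():.2f}, sd={doctors.std(ddof=1):.2f}")
print(f"patients: mean={patients.mean():.2f}, sd={patients.std(ddof=1):.2f}")

# Inferential statistics: independent-samples t-test on trust scores.
t_stat, p_value = stats.ttest_ind(doctors, patients)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```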

Research Questions

This research seeks to answer the following key questions:

  1. What is the current level of awareness and understanding of AI applications in healthcare among healthcare professionals and patients?
  2. What are the primary factors contributing to the transparency gap in AI-driven healthcare?
  3. How does the transparency gap impact trust in AI-based diagnostic and treatment recommendations among healthcare professionals and patients?
  4. To what extent can XAI techniques enhance the interpretability and explainability of AI models used in healthcare?
  5. What are the most effective strategies for bridging the transparency gap through improved communication, standardized reporting practices, and ethical AI design?

Results Received from the Questionnaire: The questionnaire was sent to 200 individuals.

Statistical Summary of AI Survey Responses


Statistical Analysis

Statistical Analysis Report: Transparency and Trust in AI-driven Healthcare

1. Introduction

This report presents a statistical analysis of survey responses related to AI-driven healthcare, focusing on trust, transparency, and familiarity with AI technologies among healthcare professionals and patients. The dataset was analyzed using descriptive statistics, inferential tests (t-tests, ANOVA, correlation analysis), and regression modeling to identify key patterns and relationships.

2. Descriptive Statistics

Key Findings:

  • Role Distribution: Respondents included doctors, nurses, allied health professionals, patients, and others, with "Other" being the most common category.
  • Experience: The most frequent response was 1–5 years of experience.
  • AI Interaction: 37% had never interacted with AI in healthcare.
  • AI Understanding: The most frequent response was "Very High".
  • Transparency Perception: The most common response was "Very Transparent".
  • Trust in AI: "Unsure" was the most frequent response.
  • Familiarity with Explainable AI (XAI): 34.5% of respondents were familiar with XAI.
  • Factors influencing trust: The most commonly cited factor for increasing trust was "Regulation and ethical oversight of AI".

3. Inferential Statistics

3.1 T-test: Trust in AI (Doctors vs. Patients)                               

  • T-statistic = 0.199
  • P-value = 0.843
  • Conclusion: No statistically significant difference in trust levels between doctors and patients.

3.2 ANOVA: Transparency Perception Across Roles

  • F-statistic = 0.387
  • P-value = 0.818
  • Conclusion: No significant differences in perceived transparency among different roles.
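Such a one-way ANOVA could be reproduced along these lines in Python with SciPy; the role groups and transparency scores below are synthetic placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 1-5 transparency-perception scores for five respondent roles.
roles = {role: rng.integers(1, 6, size=40)
         for role in ["doctor", "nurse", "allied", "patient", "other"]}

# One-way ANOVA: does perceived transparency differ across roles?
f_stat, p_value = stats.f_oneway(*roles.values())
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# A p-value above 0.05 would mirror the reported result: no significant
# difference in perceived transparency among roles.
```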

3.3 Correlation Analysis

Conclusion: Higher AI understanding is slightly associated with higher perceived transparency, but neither variable strongly predicts trust (transparency perception vs. trust: r = 0.009).

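A correlation of this kind can be computed as follows; the paired Likert responses are simulated for illustration only, with weak dependence built in so the coefficient is small but positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical paired responses: AI understanding vs. perceived transparency.
understanding = rng.integers(1, 6, size=200)
# Add noise so the association is present but imperfect, then keep on scale.
transparency = np.clip(understanding + rng.integers(-3, 4, size=200), 1, 5)

# Pearson correlation between the two Likert items.
r, p = stats.pearsonr(understanding, transparency)
print(f"r = {r:.3f}, p = {p:.3f}")
```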
3.4 Regression Analysis: Predictors of Trust in AI                       

  • Conclusion: Familiarity with XAI is the strongest predictor of trust in AI. Transparency perception and AI understanding do not significantly impact trust.

Discussion

Key Insights:

  1. Transparency alone does not drive trust: Simply making AI more explainable does not necessarily lead to higher trust. Other factors, such as ethics, regulatory oversight, and user experience, may play a larger role.
  2. AI Understanding does not guarantee trust: Having high AI knowledge does not necessarily lead to increased trust in AI-driven decisions.
  3. Explainable AI (XAI) plays a key role: Respondents familiar with XAI were more likely to trust AI.

Implications:

  • AI developers should focus on user-friendly explanations rather than just making models more transparent.
  • Healthcare professionals need more exposure to XAI techniques to increase trust.
  • Policy and regulation may be stronger trust drivers than transparency alone.

Conclusion

This study highlights that while AI transparency is important, it does not directly translate to trust. Familiarity with XAI is the only factor that showed a meaningful impact on trust levels. Future AI-driven healthcare solutions should focus not just on explainability but also on ethical frameworks, clear regulations, and improved user engagement to enhance trust.

Recommendations

  1. Improve AI Education & Training: Increase awareness of XAI techniques among healthcare professionals.
  2. Enhance AI Communication Strategies: Provide clearer, user-friendly explanations rather than just technical transparency.
  3. Regulatory & Ethical Oversight: Implement policies that ensure AI-driven decisions are fair, ethical, and well-regulated.
  4. Personalization of AI Recommendations: Tailor AI explanations based on the audience's expertise level (e.g., doctors vs. patients).

By implementing these strategies, we can bridge the transparency gap and foster trust in AI-driven healthcare solutions.

Statistical Results of Additional Statistical Tests Refining the Findings

1. Chi-Square Test: Association Between Role and Trust in AI

  • Chi-Square Value: 16.62
  • P-Value: 0.410
  • Degrees of Freedom: 16

Interpretation:

  • The p-value (0.410) is greater than 0.05, indicating no statistically significant relationship between professional role (Doctor, Nurse, etc.) and trust in AI.
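A chi-square test of independence on a role-by-trust contingency table can be sketched as below; the counts are invented, but the 5x5 table shape yields the same 16 degrees of freedom as the reported test.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = five respondent roles,
# columns = five trust levels (e.g. "No trust" ... "Full trust").
observed = np.array([
    [ 8, 10, 15,  9,  6],
    [ 7, 12, 14, 10,  5],
    [ 9,  9, 13, 11,  7],
    [10, 11, 12,  8,  6],
    [ 8, 10, 14, 10,  6],
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# dof = (5 rows - 1) * (5 cols - 1) = 16, matching the reported test.
```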
  • This suggests that trust levels in AI are similar across different roles, meaning doctors, nurses, allied health professionals, and patients do not significantly differ in their trust in AI.

2. Factor Analysis (PCA): Key Components of Transparency & Trust

  • Explained Variance (First Two Components):
    • PC1: 25.77% of variance
    • PC2: 22.25% of variance

Interpretation:

  • The first two principal components explain ~48% of the total variance in the data.
  • This indicates that transparency perception, AI interaction, trust, and familiarity with XAI share common underlying factors, but no single dominant variable explains most of the variance.
  • This supports the idea that multiple factors contribute to trust in AI, rather than just transparency alone.
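The explained-variance figures come from principal component analysis; a minimal PCA via the eigenvalues of the covariance matrix can be sketched in plain NumPy (the survey response matrix below is simulated, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical standardized survey responses: 200 respondents x 5 items
# (transparency perception, AI interaction, trust, XAI familiarity, ...).
X = rng.normal(size=(200, 5))
X -= X.mean(axis=0)                       # centre each item

# PCA via the covariance matrix: eigenvalues give variance per component.
cov = np.cov(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
explained = eigvals / eigvals.sum()

print("explained variance ratios:", np.round(explained, 3))
# With uncorrelated synthetic data each component explains roughly 20%;
# in the survey, the first two components explained ~26% and ~22%.
```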

3. Multivariate Regression: Predicting Trust in AI

  • R-squared: 0.026 (very low predictive power)
  • Significant Predictors (p < 0.05): None
  • Regression Coefficients:
    • AI Interaction (p = 0.100): Slight positive relationship, but not statistically significant.
    • Transparency Perception (p = 0.340): No significant effect on trust.
    • Familiarity with XAI (p = 0.328): No significant effect on trust.
    • AI Understanding (p = 0.831): No significant effect on trust.
    • XAI Importance (p = 0.839): No significant effect on trust.

Interpretation:

  • None of the independent variables significantly predict trust in AI.
  • Transparency, AI familiarity, and AI understanding do not strongly influence trust levels when combined in a regression model.
  • This further reinforces that trust in AI is likely influenced by external factors (e.g., regulatory oversight, ethics, user experience), not just explainability.
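A multivariate regression of this form, with its R-squared, can be sketched with ordinary least squares in NumPy; the predictors and trust scores below are synthetic, deliberately constructed so that the fit is weak.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# Hypothetical predictors: AI interaction, transparency perception,
# XAI familiarity, AI understanding, XAI importance (1-5 scales).
X = rng.integers(1, 6, size=(n, 5)).astype(float)
# Trust is mostly noise, so the predictors carry little signal.
y = 3.0 + 0.05 * X[:, 0] + rng.normal(scale=1.5, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

residuals = y - A @ coef
r_squared = 1 - residuals.var() / y.var()
print(f"R-squared = {r_squared:.3f}")  # small, in the spirit of the reported 0.026
```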

Graphical Representations

Demographics Summary Table – Displays roles, experience, and AI interaction.

AI Understanding vs Transparency Perception (Bar Chart) – Highlights respondents' understanding of AI and their perception of transparency.

Trust in AI (Pie Chart) – Shows the distribution of trust levels in AI among respondents.

XAI Familiarity vs Importance (Comparative Bar Chart) – Compares familiarity with explainable AI (XAI) and its perceived importance.

Statistical Analysis Summary Table – Summarizes key statistical findings such as T-tests, ANOVA, correlations, and regression analysis.

Correlation Scatter Plot (AI Understanding, Transparency, and Trust) – Illustrates the relationship between AI understanding, transparency perception, and trust.


 

Answers On The Research Questions

Based on the statistical analysis above, validated answers to each research question follow:

1. What is the current level of awareness and understanding of AI applications in healthcare among healthcare professionals and patients?

  • Findings:
    • A significant portion of respondents reported a very high understanding of AI in healthcare.
    • 37% of respondents had never interacted with AI in healthcare.
    • 34.5% of respondents were familiar with Explainable AI (XAI).
  • Conclusion:
    • Awareness and understanding of AI in healthcare vary significantly. While some respondents report a high level of understanding, a large portion has limited or no direct interaction with AI-driven applications.

2. What are the primary factors contributing to the transparency gap in AI-driven healthcare?

  • Findings:
    • The most commonly cited factors contributing to the lack of transparency were:
      • Complexity of AI algorithms
      • Lack of clear explanations from AI systems
      • Insufficient standardization in AI reporting
      • Algorithmic bias and data limitations
      • Limited regulatory oversight
  • Conclusion:
    • The transparency gap is largely driven by technical opacity, lack of standardized communication, and potential biases in AI decision-making.

3. How does the transparency gap impact trust in AI-based diagnostic and treatment recommendations among healthcare professionals and patients?

  • Findings:
    • Trust in AI was generally low, with "Unsure" being the most frequent response.
    • T-test results showed no statistically significant difference in trust levels between doctors and patients (p = 0.843).
    • Transparency perception did not strongly predict trust (correlation: r = 0.009).
    • The most commonly cited factor for increasing trust was "Regulation and ethical oversight of AI".
  • Conclusion:
    • The transparency gap does not necessarily drive trust. Instead, trust in AI is more influenced by regulatory oversight and ethical safeguards rather than just making AI more explainable.

 

4. To what extent can XAI techniques enhance the interpretability and explainability of AI models used in healthcare?

  • Findings:
    • Familiarity with XAI was the strongest predictor of trust in AI, with a borderline significant correlation (p = 0.068).
    • Transparency perception and AI understanding did not significantly impact trust.
    • Participants favored visual and interactive explainability methods such as:
      • AI-generated visual explanations (charts, graphs)
      • Plain-language summaries
      • Interactive tools to explore AI decisions
  • Conclusion:
    • XAI techniques improve interpretability but do not directly lead to increased trust. While they help healthcare professionals better understand AI decisions, other factors, such as ethical AI design and regulatory oversight, play a more critical role.

5. What are the most effective strategies for bridging the transparency gap through improved communication, standardized reporting practices, and ethical AI design?

  • Findings:
    • The most effective strategies for bridging the transparency gap were:
      • Developing AI systems that are inherently interpretable
      • Providing clear, standardized reporting of AI decisions
      • Increasing education and training on AI in healthcare
      • Improving regulations and ethical guidelines for AI use
      • Encouraging collaboration between AI developers and healthcare professionals
    • Respondents indicated they would be more willing to accept AI recommendations if provided with a clear, understandable explanation.
  • Conclusion:
    • A combination of standardized reporting, education, regulatory frameworks, and AI-human collaboration is essential for bridging the transparency gap. Simply making AI models more explainable is not enough—ethical considerations and regulatory oversight play a crucial role in ensuring trust.

Final Takeaway

The transparency gap in AI-driven healthcare is a complex issue that does not have a single solution. Trust is not solely dependent on explainability—ethical considerations, regulatory oversight, and better communication strategies are equally (if not more) important. Implementing XAI techniques helps improve interpretability, but a multifaceted approach including education, regulation, and collaboration is necessary to fully bridge the gap.

Discussion

The study reveals a nuanced landscape of perceptions and attitudes toward AI in healthcare, highlighting the complexities surrounding trust, transparency, and the role of explainability. While the integration of AI holds immense promise for improving healthcare outcomes, its successful adoption hinges on addressing the concerns of healthcare professionals and patients.

7.1 Awareness and Understanding of AI

The survey data indicates a mixed level of awareness and understanding of AI applications in healthcare. While a notable proportion of respondents self-reported a high understanding, a significant number, particularly patients, have had limited direct interaction with AI-driven applications. This disparity suggests that while there is a growing awareness of AI's potential, practical exposure and understanding of its capabilities remain unevenly distributed. This lack of hands-on experience may contribute to skepticism and resistance to adopting AI-based recommendations.

7.2 The Transparency Gap: Multifaceted Challenges

The findings reinforce the existence of a significant transparency gap in AI-driven healthcare. This gap is not solely attributable to the technical complexity of AI algorithms but also stems from a lack of clear and accessible explanations, insufficient standardization in reporting, and concerns about algorithmic bias. The complexity of AI algorithms was identified as a major barrier to trust. While there is a demand for transparency, simply providing complex technical details may not be effective. The need for tailored and contextualized explanations is crucial.

7.3 Trust: Beyond Transparency

Contrary to initial expectations, the study revealed that transparency alone does not automatically translate to trust in AI-based recommendations. The correlation between transparency perception and trust was weak, suggesting that other factors play a more significant role. This finding challenges the common assumption that simply making AI more explainable will lead to increased acceptance and adoption. The most commonly cited factor for increasing trust was "Regulation and ethical oversight of AI." This suggests that confidence in AI systems is strongly tied to the perception that these systems are being developed and deployed responsibly, with safeguards in place to prevent harm and ensure fairness.

7.4 Explainable AI (XAI): Promising, but Not a Panacea

Familiarity with XAI techniques emerged as a potential factor influencing trust in AI. Respondents familiar with XAI were more likely to trust AI, suggesting that a better understanding of how AI makes decisions can increase confidence. The study also explored the preferred methods of XAI delivery. Participants favored visual and interactive explainability methods such as AI-generated visual explanations (charts, graphs), plain-language summaries, and interactive tools to explore AI decisions. These methods offer the potential to enhance comprehension and engagement with AI-driven insights.
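One of the favored delivery formats, the plain-language summary, can be sketched in a few lines. The example below assumes a hypothetical linear risk model (the weights, feature names, and patient values are purely illustrative, not drawn from the study) and converts per-feature contributions into sentences a patient could read:

```python
def explain_prediction(weights, values, feature_names, top_k=3):
    """Turn a linear model's per-feature contributions into a
    plain-language summary, one of the explanation formats
    respondents favored alongside charts and interactive tools."""
    contributions = [w * v for w, v in zip(weights, values)]
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda fc: abs(fc[1]), reverse=True)
    lines = []
    for name, c in ranked[:top_k]:
        direction = "raises" if c > 0 else "lowers"
        lines.append(f"- {name} {direction} the estimated risk "
                     f"(contribution {c:+.2f})")
    return "\n".join(lines)

# Hypothetical model and patient; illustrative values only.
names   = ["age", "systolic BP", "smoker", "HDL cholesterol"]
weights = [0.03, 0.02, 0.80, -0.05]
values  = [62, 148, 1, 55]

print(explain_prediction(weights, values, names))
```

The same contribution list could equally feed a bar chart or an interactive widget; the point is that the explanation is ranked by impact and phrased as "why," not as raw coefficients.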

7.5 Ethical Considerations and Bias Mitigation

The survey results underscore the importance of ethical considerations in AI-driven healthcare. Respondents identified several priorities for mitigating algorithmic bias: more diverse training data, regular audits for bias detection, clear guidelines on AI ethics, and human review of AI decisions.
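A "regular audit for bias detection" can start very simply. The sketch below (with a made-up audit log; the field names are assumptions, not from the study) computes one common fairness signal, the gap in positive-recommendation rates between demographic groups, sometimes called the demographic parity difference:

```python
def positive_rate_gap(records, group_key, outcome_key):
    """Audit AI recommendations for one simple fairness signal:
    the gap in positive-recommendation rates between groups."""
    counts = {}
    for rec in records:
        g = rec[group_key]
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (1 if rec[outcome_key] else 0))
    by_group = {g: pos / n for g, (n, pos) in counts.items()}
    return by_group, max(by_group.values()) - min(by_group.values())

# Hypothetical audit log of AI treatment recommendations.
log = [
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": False},
    {"group": "A", "recommended": True},
    {"group": "B", "recommended": True},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": False},
]

rates, gap = positive_rate_gap(log, "group", "recommended")
print(rates, f"gap = {gap:.2f}")  # a large gap flags the model for human review
```

Real audits would use richer metrics (equalized odds, calibration by subgroup) and clinical context, but even this minimal check makes "regular audits" concrete and automatable.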

7.6 Communication is Key

The study stresses the importance of effective communication strategies for conveying AI-driven insights to both clinicians and patients. Clear, user-friendly explanations, tailored to the recipient's level of expertise, can enhance comprehension and acceptance of AI recommendations. The findings point to a shift from technical transparency to contextual explainability, focusing on the "why" behind AI decisions rather than just the "how."

Conclusion

This research provides valuable insights into the complex relationship between transparency, trust, and acceptance of AI in healthcare. The study's findings challenge the assumption that transparency alone is sufficient to foster trust. While explainability and XAI techniques play a crucial role in enhancing understanding, trust is ultimately shaped by broader factors, including regulatory oversight, ethical considerations, and effective communication strategies.

The study recommends investing in AI education and training, enhancing AI communication strategies, establishing regulatory and ethical oversight, and personalizing AI recommendations. Implementing these strategies can bridge the transparency gap and foster trust in AI-driven healthcare solutions.

The responsible and ethical deployment of AI in healthcare requires a multi-faceted approach that prioritizes transparency, explainability, fairness, and accountability. By addressing these challenges, we can harness the transformative potential of AI to improve healthcare outcomes and enhance patient well-being.

Limitations

This study has several limitations that should be considered when interpreting the findings.

  • Sample Size and Composition: The sample size of 200 respondents may limit the generalizability of the findings. Additionally, the composition of the sample, with varying levels of experience and roles, may introduce potential biases.
  • Self-Reported Data: The reliance on self-reported data, particularly regarding awareness and understanding of AI, may be subject to recall bias and social desirability bias.
  • Survey Design: The survey questions, while designed to be comprehensive, may not have captured the full range of perspectives and experiences related to AI in healthcare.
  • Focus on Perceptions: The study primarily focused on perceptions and attitudes, rather than objective measures of AI performance or the impact of AI on clinical outcomes.

Future Research Directions

This research opens several avenues for future investigation:

  • Longitudinal Studies: Conducting longitudinal studies to examine the evolution of trust and acceptance of AI in healthcare over time.
  • Comparative Studies: Comparing the effectiveness of different XAI techniques in enhancing trust and understanding among different user groups (e.g., doctors vs. patients).
  • Intervention Studies: Designing and evaluating interventions aimed at improving AI communication strategies and enhancing awareness of ethical considerations.
  • Evaluation of Real-World AI Deployments: Assessing the impact of real-world AI deployments on clinical outcomes, cost-effectiveness, and patient satisfaction.
  • Addressing Algorithmic Bias: Research on developing and implementing fairness-aware AI models and bias mitigation strategies to ensure equitable healthcare outcomes.
  • Regulatory Framework Development: Contributing to the development of ethical guidelines and regulatory frameworks for the responsible use of AI in healthcare.
