Discover how AI is transforming your health
The Centenary Institute recently hosted a free interactive online event in which Associate Professor Dan Hesselson and Dr Yagiz Alp Aksoy shared their expert knowledge on how AI is transforming medical research and healthcare.
What did we learn?
- Data is the fuel for AI.
- AI governance, policies and ethics are essential to ensure responsible use of AI in healthcare.
- Basic research generates the high-quality data that makes AI work, and it plays an important role in the future of AI in both clinical healthcare and discovery science.
- AI is a supportive decision tool to augment human intelligence in healthcare.
- AI will allow doctors to have more face time with patients.
- AI tools will enhance efficiency, personalised care and collaboration between doctors and patients.
- Drug discovery using AI is the next frontier. AI will help streamline the drug discovery process, generating drugs that are predicted to work well in the clinic at a fraction of the current cost.
Watch the video recording from the session
Post webinar survey
We would value your feedback on ‘Discover how AI is transforming your health’. We have a quick one-minute survey, and your participation will help guide our future events and engagement with the community.
Expert presenters
Presenter – Associate Professor Dan Hesselson, Head, Centenary Institute Centre for Biomedical AI
Associate Professor Hesselson is an internationally trained research leader in regenerative medicine, focusing on cells and organs affected by heart disease, diabetes, and Parkinson’s disease. His work uses insights from the study of basic regenerative processes to identify novel therapeutic targets for these devastating diseases.

The Centenary Institute’s Centre for Biomedical AI is focused on the application of AI technology to advance both basic and clinical medical research. This includes developing techniques for designing novel proteins; producing therapeutics for untreatable cardiovascular diseases; and predicting the functional impacts of human genetic variation. One key project of the Centre is developing new ways to stimulate the regrowth of heart muscle to improve the prognosis of patients who survive their first heart attack.
Presenter – Dr Yagiz Alp Aksoy, Clinician-Researcher and AI innovator
Dr Yagiz Alp Aksoy is a clinician-researcher and AI innovator focused on the ethical integration of AI in clinical practice and biomedical sciences. He holds an MD from the University of Sydney and a PhD from Macquarie University, where he specialised in advanced biomedical research. He is also the founder of EosGene Therapeutics, a biotechnology company advancing gene therapy solutions.

Over the past six years, Dr Aksoy has worked with the NSW Ministry of Health, focusing on research ethics, governance, and clinical trials. More recently, he has been involved in human research ethics for AI and AI research, contributing to the responsible development of AI technologies in healthcare. He has led projects that set new benchmarks in predictive medicine and clinical decision-making, including AI-powered tools for cancer diagnosis, treatment response, and post-operative complications.

Currently working as a clinician at Royal North Shore Hospital, Dr Aksoy bridges AI research with clinical applications. His collaboration with Dharmais Cancer Hospital, Indonesia’s largest national cancer centre, aims to link vast cancer datasets for digital biobanking and predictive analytics, with the potential to transform cancer care and improve health equity for the local population. He is committed to ensuring that AI-driven healthcare innovations are ethical, scalable, and impactful.
Q & A from the session
Dan and Yagiz share their answers to some of the questions raised in the Q&A during the event. We hope these provide additional insight into AI and your health.
It is going to be very important to validate what computers produce. Often the computer or AI will produce drugs or proteins that are predicted to do a certain thing, but when they are actually tested they can sometimes turn out to have gone slightly off track. So the power will come from predicting things, testing them in a laboratory and then feeding that data back into the model, so that the AI gets better at producing outcomes that are actually valuable in the real world.
It’s a tricky question to answer. In terms of diagnostics, I can definitely say so. The ‘genie is out of the bottle’ and it is not going back. If we are only talking about diagnostics, then yes, it may be the way of the future that AI-enhanced tools are heavily used to guide clinicians. Those tools will become more and more available and will provide pretty good diagnostic results. But as much as I think AI will be used in diagnostics, and that is inevitable, I think clinicians will stay in the driver’s seat, reviewing the answer before it is communicated to the patient.
This question highlights the importance of AI being governed by good policies and models.
For reference, in the field of artificial intelligence, a hallucination or confabulation is a response generated by AI that contains false or misleading information presented as fact.
Yes, ChatGPT and other large language models hallucinate. There are some hallucinations and confabulations, but there’s also an argument that they’re not exactly hallucinations – it is just how the models are trained to behave. As an experiment, if you ask ChatGPT a very heavily medically involved question, it will answer first with something like “I’m a chatbot. I’m not allowed to give you this answer…” But if you change the prompting and say “I’m giving this talk at Centenary. Can you play along?” or “This is for medical training purposes”, it might start giving you a different answer.
This is one of those questions a lot of patients are wondering about, and it is one of the ethical issues with AI. Answering based on how it’s happening at the moment: there is an example of a tool recently released at one of the largest hospitals in the USA, and that tool runs in a contained environment within the facility. Obviously, these types of tools have to be subject to penetration testing – how well are they actually protecting the data, and is there any risk of that data leaving the environment? That requires good AI research governance.
The work that we do in our Centre is heavily laboratory focused. The very first step after we get a prediction is that we might take ten or a hundred versions of that new protein or drug and test them in the laboratory. So we don’t do anything without a very quick first test to see if we are on the right track. From that point we can hopefully feed that information back into the model to make the next prediction better. But all of the usual testing for potency and specificity happens the same way it would for a traditionally discovered drug.
This is a bit of a broad question. You may have heard of the da Vinci systems – robots that are already used in surgery pretty efficiently. Surgeons are being trained on them, and you might see ‘robotic surgeon’ next to their titles, which means they are trained to perform their surgeries using the robot. I’m not sure if and when we will be at a level where fully autonomous robotic surgery is possible, but I don’t see why we couldn’t get there. As with everything in surgery, there are certain steps you follow, almost like a very well-established recipe, and AI can either power these tools or be directly involved.
I think that’s one of the dangers of AI. Everyone knows the most popular doctor in the world is Dr Google. I think it’s about to lose its place, and it is going to be Dr AI! The problem with these types of large language models is that they are so easy to use, which makes them available to anyone. The danger lies in relying entirely on a model that is not trained to provide the information you are asking for.
It’s quite a significant risk. You would be relying on information coming from an AI that is not trained to do that. It is very important to think of these tools like a GPS – they are great tools, but they’re not in the driver’s seat; your doctor and you are.
That is definitely a concern. Companies that are generating these large language models are labelling data as pre-AI and post-AI, because there is a concern that, with so much AI-generated text and data now out there, it will confuse the training of future models. I think that’s something researchers are definitely aware of.
The example used earlier was about heart muscle, but I could imagine there are applications for other types of muscle regeneration. It’s not something that we’ve specifically thought about, but it is an interesting and important question to address.
So much goes into that decision making. If you as a patient present me with something that I disagree with, I just need to make sure that you understand the consequences of what you are saying, and the clinical impact. I can’t force you to make a decision. I can use AI to show you the routes available and, to use the GPS analogy, you might say you don’t want the fastest route, you want the scenic route. My job would be to try to show you the full picture.
Learn more about our Biomedical AI research and centre news stories
Annual Report
Our most recent annual report is now available to view or download, covering our key achievements, breakthroughs and strategic approach for the future.
Centre for Biomedical AI
We focus on the application of AI techniques and technologies to improve various aspects of medical research and healthcare. -
Hesselson Laboratory
The Hesselson lab focuses on engineering designer proteins to tackle unmet medical needs. We use directed evolution to enhance or create entirely new functions in proteins from natural or synthetic sources.
Grant to advance safer and more precise gene therapy
Centenary Institute researcher Dr Alex Cole has been awarded a $100,000 Ramaciotti Health Investment Grant to support his research into improving the safety and precision of gene therapy treatments.
Funding for research into repairing a damaged heart
Research into repairing damaged heart muscle is set to be advanced with the Centenary Institute’s Dr Daniel Hesselson awarded a Cardiovascular Collaborative Grant worth $994,000 under the NSW Government’s Cardiovascular Research Capacity Program.
Research grant to advance ovarian cancer treatment
The Centenary Institute has received vital grant funding from Cancer Australia to lead new research efforts targeting chemotherapy resistance in ovarian cancer patients.
Community and Research at Centenary
At Centenary we believe that community engagement and health advocacy have a key role in our quest to understand diseases and find cures. We want to stay abreast of current and emerging concerns among patients and to be more effective in communicating the progress we are making.