The Ethics of Future Medical Technology
Plus details on our upcoming events: a workshop on how to evaluate an argument, and a café and a class on grief. We also have suggested readings and resources, and more!
Introduction
By Marybel Menzies
Let’s face it—stories of futuristic medical technology gone awry are everywhere in pop culture. Who can forget HAL from 2001: A Space Odyssey turning on astronauts David and Frank, or the robots of I, Robot, commandeered by the rogue AI VIKI, turning on Detective Spooner? And then there’s Gattaca, which dives into the unsettling world of genetic engineering and its eugenic possibilities. These cautionary tales make it pretty clear: when it comes to new medical tech, a little skepticism is probably wise.
But here’s the twist—despite all these warnings, we keep pushing forward. Why? Well, the potential upsides are just as impossible to ignore. Picture this: what if we had diagnostic tools so accurate, they could catch diseases before they even had a chance to take hold? Or imagine being able to edit out genetic disorders like sickle cell anemia entirely, offering hope where there was none before.
That’s the promise these technologies hold. Sure, the risks are real, and the ethical dilemmas aren’t going away any time soon. But if these innovations could save countless lives, isn’t it worth at least considering the gamble? As we discussed last Tuesday, though, the answer isn’t as simple as it might seem. But first:
Featured Content:
Curiosity Café Recap: The Ethics of Future Medical Technology
Community Survey
Upcoming Events
Our Next Workshop: How to Evaluate an Argument (June 21)
Our Last Class for Summer 2025: The Experience of Grief (July 5)
Our Next Café: Grief (June 24)
Readings & Resources
Curiosity Café Recap: The Ethics of Future Medical Technology
By Marybel Menzies

Our latest Curiosity Café, moderated by Lea Love and Marybel Menzies, began with a discussion of two emerging medical technologies: diagnostic artificial intelligence and genetic engineering. These technologies have introduced new questions and issues into the clinical ethical landscape. Currently, clinicians make ethical decisions by applying four basic principles, which we briefly covered:
Autonomy: Individuals have the right to guide their own decision-making. Their values and preferences must be respected. This includes the right to informed consent.
Non-maleficence: This is the “do-no-harm” principle. Clinicians must minimize risk to patients.
Beneficence: This is the “best interest” principle. Clinicians must aim to benefit patients.
Justice: This is the “equity” principle. Clinicians must aim to distribute resources fairly.
The first half of our discussion focused on how to apply these principles to solve issues surrounding the introduction of AI-assisted diagnostic tools. Our moderators asked us to consider the following scenario:
A hospital has implemented an AI system that can diagnose patients and recommend treatments. The AI has shown a high success rate, often higher than human doctors. However, there have been instances where patients disagree with the AI’s recommendations. Furthermore, there have been cases where a patient’s condition worsened after following the AI’s advice.
From here, the conversation took off.
In general, most attendees said they were pro AI-assisted diagnosis. Nevertheless, their support depended on a number of factors:
The AI must have safety checks and be tested for accuracy. After all, we are constantly innovating medical technologies, and if AI is just like previous innovations, we should welcome it.
Additionally, AI systems must comply with strong privacy protections. As long as we can trust that the medical data AI collects on us is protected, the technology is safe to use.
Furthermore, if AI is more reliable than a physician’s diagnosis, that is another reason to prefer it over fallible human judgment. For instance, AI can be trained to exhibit less racial bias, say, than human doctors currently do.
However, as noted by others, it is not clear that AI will necessarily have all of the safety precautions established, especially if it is primarily being devised and implemented by big tech companies with monetary incentives.
In other words, if Meta and Apple are the primary stakeholders when it comes to the medical application of AI, then AI technology cannot be trusted to have our best interests at heart and protect our medical information. On the contrary, there is a high likelihood that these companies will seek to capitalize on our information and sell it to the highest bidder.
Not only that, but AI has already developed a reputation for deception and fabrication. It seems putting our trust in AI diagnosis, as it currently stands, would be far too hasty.
In response to this concern, though, others pointed out that AI has demonstrated an incredibly high degree of accuracy in detecting particular conditions, such as malignant cancers, and is therefore much better positioned to save lives, regardless of its apparent shortcomings.
Additionally, it has already surpassed radiological benchmarks for the detection of various tumours and even benign abnormalities.
One community member, a practising doctor, recalled a time when doctors knew everything about their patients and were able to build personal relationships with them, something much less common nowadays. In their view, this relationship is crucial to the effective practice of medicine: the connections that people create between one another can help with the prevention of illness, and with accurate diagnoses, too. Moreover, there are effective holistic practices, which AI would likely fail to recommend, that may even be more beneficial than modern medicine.
Concerns about privacy and AI arose again and again: if AI systems have access to our biomedical information, that information could easily be used for malicious purposes, such as political discrimination based on ethnic background.
Thus, while we may be a long way from AI with malicious intentions of its own, it is easy to see how humans with malicious intentions could misuse AI as it stands, and we should be concerned about this possibility.
So, how should AI be integrated into medicine?
AI must be accountable. When mistakes happen, we must be able to make sure they don’t happen again.
Additionally, good data is crucial: a system trained on carefully curated data will exhibit less bias in its recommendations.
Further, commercial providers such as Microsoft’s Copilot and OpenAI should probably not be trusted, regardless of whether a deployment is independent or doctor-owned, because of the way these companies are incentivized.
Instead of serving as a diagnostic assistant, AI could be used to make the information doctors provide about a patient’s diagnosis more accessible to the patient, for instance by generating an easy-to-understand summary of their medical charts.
The second half of our conversation focused on how to apply these principles to solve issues surrounding the use of genetic engineering in disease prevention and trait enhancement. Our moderators asked us to consider the following scenario:
Imagine that in the near future, genetic engineering technology allows parents to select specific traits for their children before birth, including traits such as intelligence, physical abilities, and appearance. Should we allow this?
Here’s what some of you had to say:
When it comes to genetic diseases and anything else that increases pain and suffering, we should seriously consider using genetic engineering technologies. However, physical traits like height or hair colour would not count because it’s unclear whether they cause genuine suffering.
Furthermore, by genetically modifying physical traits, we would be increasing bias based on physical appearance.
In the 1950s and ’60s, thalidomide was a widely used drug for the treatment of nausea in pregnant women. By the early 1960s, it became apparent that thalidomide treatment had resulted in severe birth defects in thousands of children. Given the possibility of such severe consequences when implementing new medical technologies, we should be equally careful with anything that carries similarly radical possibilities, like genetic engineering.
Another huge concern with these technologies is the potential erasure of certain communities, such as the deaf community. These communities have unique cultural practices, and if we value diversity, then we should care about keeping these practices alive.
On the plus side, though, we’ve already eradicated some illnesses, such as smallpox, through earlier medical interventions. It seems permissible to use genetic engineering in analogous cases: if a disease had a single responsible gene, it would be ethical to remove it.
That said, while preserving life is a widely accepted moral maxim, there is no clear moral imperative to enhance human traits beyond addressing disease or suffering. If a biologically healthy condition exists, pursuing it may be justified, but moving beyond that enters the realm of eugenics, raising ethical concerns about what constitutes a “healthy” or “ideal” person.
Importantly, one community member noted that there is a key distinction between genetic and cultural enhancement. Genetic traits can be passed down through generations, making them relatively stable, while culture is more fluid and less tangible. When considering enhancements, especially in children, the question arises: who should decide what is valuable to enhance—the individual, their parents, or society at large?
What counts as “enhancement”? When this term is based on, for instance, societal beauty standards, we run the risk of further entrenching deeply problematic and exclusionary ideals and resulting hierarchies, and simultaneously erasing uniquely valuable differences.
On a related note, genetically engineered human traits are different from the kind of genetically modified crops we are familiar with.
For instance, corn can be engineered for disease resistance, but humans are more complex, and modifying physical traits could have unintended consequences. If society values diversity, allowing genetic modification may undermine this principle and lead to a loss of unique individual differences.
On the topic of regulating genetic engineering, regulators must be democratically accountable: the government enforces regulations, but it is ultimately answerable to the people who elect it.
Maybe government officials aren’t necessarily in the best position to assess how to regulate the medical usage of this technology. Rather, the final authority should be given to physicians who are directly acquainted with the needs of patients.
Community Survey
For those who’ve attended our Curiosity Cafés, please consider taking our brief community survey. We are conducting this survey to gather feedback on our events, and it should only take a few minutes to complete. Your responses are completely anonymous and will be invaluable in helping us improve our offerings. Thank you in advance!
Upcoming Events
Curiosity in Session is Being and Becoming’s new educational initiative. It seeks to facilitate public access to philosophy and increase public philosophical literacy through bi-weekly Curiosity Classes and Philosophical Skills Workshops, held on alternate weeks to our cafés.
Last Curiosity Class, we gathered at the High Park branch of the Toronto Public Library to discuss the intersection between social harms and cancel culture. Featuring a mini-café on social harms in the first half and an evolving discussion of Millman’s proposal in the second, together we explored the social and power dynamics at play in cancellation.
Wish you could have been there? Or maybe you have attended our first Philosophical Skills Workshop on How to Reconstruct an Argument and want to know what’s next?
Then join us on Saturday, June 21st, from 1:30 to 3:30 p.m. for our next Philosophical Skills Workshop (an interactive, activity-based session with no preparation necessary) on How to Evaluate an Argument:
We will host our next Philosophical Skills Workshop on Saturday, June 21st, from 1:30 to 3:30 p.m. at the High Park Branch of the Toronto Public Library (High Park Meeting Room, 228 Roncesvalles Ave., Toronto, ON, M6R 2L7). Come and check in from 1:20 to 1:30 p.m. The class will run from 1:30 to 3:30 p.m. with a 10-minute break in the middle!
Missing the event description? Head over to our Eventbrite page! We’re trying to make the length of our newsletters a little less overwhelming.
Believe it or not, we have just one Curiosity Class left for Summer 2025!
Take advantage of our July-only BOGO summer sale: bring a friend and share the love of wisdom with them for free!
Don’t miss out: Join us on Saturday, July 5th, from 1:00 to 3:00 p.m. (note the time change) for our final Curiosity Class of Summer 2025: The Experience of Grief
We will host our final Curiosity Class of Summer 2025 on Saturday, July 5th, from 1:00 to 3:00 p.m. at the Parkdale Branch of the Toronto Public Library (Parkdale Auditorium, 1303 Queen St. West, Toronto, ON, M6K 1L6). Come and check in from 12:50 to 1:00 p.m. The class will run from 1:00 to 3:00 p.m. with a 10-minute break in the middle!
Missing the event description? Head over to our Eventbrite page! We’re trying to make the length of our newsletters a little less overwhelming.
Curiosity Café: Bi-weekly on Tuesdays, tickets* below!
*Psst: hey community! Sophia here, Director of Community Programming. Right now, we are working towards the goal of providing those who contribute to making the cafés happen equitable financial compensation for their work, so that we can feasibly and sustainably develop as an organization and provide more programming for all of you. We also want to keep our cafés as accessible as possible, so we are keeping the pay-what-you-can model in place for tickets. Our recommended ticket amount is now $15. As you choose what to donate within the bounds of your own financial means, please keep in mind that your donation will go directly towards compensating those involved in making Curiosity Cafés happen, so that we can keep doing this for you.
We will be hosting our next Curiosity Café on Tuesday, June 24th, from 6:00 to 8:30 p.m. at the Madison Avenue Pub (14 Madison Ave, Toronto, ON M5R 2S1). Come and hang out with us, grab food, and read through our handout from 6:00 to 6:30 p.m. Our structured discussion will run from 6:30 to 8:30 p.m. with a 10-minute break in the middle!
The topic of our next café is: Grief
Missing the event description? Head over to our Eventbrite page! We’re trying to make the length of our newsletters a little less overwhelming.
If you have accessibility-related concerns, please visit our Eventbrite page—in the event description, you will find some accessibility-related information about the venue and the event.
We still have five free tickets available for our attendees. If paying anything at all is not financially feasible for you or our ticketing system presents some other barrier, please contact our Director of Community Programming, Sophia, at sophia@beingnbecoming.org. These tickets will be given away on a first-come, first-served basis, no questions asked! You can expect to hear back from her within 72 hours.
Readings & Resources
Marybel’s Recommendation:
A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We’re Going by Michael Muthukrishna
While not strictly related to the ethics of future medical technology, the recently published book I’ve been reading takes you through how humanity has advanced to our current innovative heights, and provides some suggestions for how to restructure governance to help us anticipate upcoming challenges. Roughly, Muthukrishna argues that four “laws of life” help explain how humanity has gotten to where it is, and also can help us anticipate where to go next. Those laws of life include: “energy, innovation, cooperation, and the forces of evolution that shape all three.” Using these laws, we can better predict how technology and innovation will develop and determine how to better harness those innovations for the benefit of all.
Featured Quote
Science in the service of humanity is technology, but lack of wisdom may make the service harmful.
- Isaac Asimov
Our mission is to present a diversity of perspectives and views. The views and opinions expressed in this newsletter are solely those of the individual authors and do not necessarily reflect the views or opinions of Being and Becoming. Being and Becoming disclaims any responsibility for the content and opinions presented in the newsletter, as they are the exclusive responsibility of the respective authors. If you disagree with any of those presented herein, and you feel so inclined, we recommend reaching out to the original author and asking them how they came to hold that opinion. It’s a great conversation starter.