Targeted, high-energy cancer treatments get a supercomputing boost

Radiation therapy shoots high-energy particles into the body to destroy or damage cancer cells. Over the last century, the technologies involved have steadily improved, and radiation therapy has become a highly effective way to treat cancer. However, physicians must still walk a fine line between delivering enough radiation to kill tumors and sparing the surrounding healthy tissue.

“Historically, radiation has been a blunt tool,” said Matt Vaughn, Director of Life Science Computing at the Texas Advanced Computing Center. “However, it’s become ever more precise because we understand the physics and biology of systems that we’re shooting radiation into, and have improved our ability to target the delivery of that radiation.”

The science of calculating and assessing the radiation dose received by the human body is known as dosimetry — and here, as in many areas of science, advanced computing plays an important role.

Improving Radiation Therapy With Real-Time Imaging

Current radiation treatments rely on imaging from computed tomography (CT) scans taken prior to treatment to determine a tumor’s location. This works well if the tumor lies in an easily detectable and immobile location, but less so if the area is moving, as in the case of lung cancer.

At the University of Texas MD Anderson Cancer Center, scientists are tackling the problem of accurately targeting tumors using a new technology known as the MR-linac, which combines magnetic resonance (MR) imaging with a linear accelerator (linac). Developed by Elekta in cooperation with UMC Utrecht and Philips, the MR-linac at MD Anderson is the first of its kind in the U.S.

MR-linacs can image a patient’s anatomy while the radiation beam is being delivered, allowing doctors to detect and visualize any anatomical changes in a patient during treatment. Unlike CT and other x-ray based imaging modalities, which deliver additional ionizing radiation, MRI is harmless to healthy tissue.

The MR-linac method offers a potentially significant improvement over current image-guided cancer treatment technology. However, to ensure patients are treated safely, scientists must first correct for the influence of the MRI’s magnetic field on the measurements used to calibrate the radiation dose being delivered.
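
The underlying physics is straightforward to sketch: the dose a detector registers is mostly deposited by secondary electrons, and the MRI’s magnetic field exerts a Lorentz force on those electrons, curling their paths. Below is a minimal back-of-the-envelope estimate in Python; the 1.5-tesla field and the 1 MeV electron energy are illustrative assumptions, not figures from the study.

```python
import math

# Physical constants
M_E_C2 = 0.511e6      # electron rest energy, eV
E_CHARGE = 1.602e-19  # elementary charge, C
C_LIGHT = 2.998e8     # speed of light, m/s

def gyroradius_m(kinetic_energy_ev, b_tesla):
    """Relativistic gyroradius r = p / (qB) of an electron
    moving perpendicular to a magnetic field."""
    total_e = kinetic_energy_ev + M_E_C2        # total energy, eV
    pc_ev = math.sqrt(total_e**2 - M_E_C2**2)   # momentum times c, eV
    p_si = pc_ev * E_CHARGE / C_LIGHT           # momentum, kg*m/s
    return p_si / (E_CHARGE * b_tesla)

# A ~1 MeV secondary electron in an assumed 1.5 T MRI field
print(f"gyroradius: {gyroradius_m(1.0e6, 1.5) * 1e3:.1f} mm")  # ~3.2 mm
```

A gyroradius of a few millimeters is comparable to the dimensions of an ionization chamber cavity, so the field measurably changes how much ionization the chamber collects, and the calibration must be corrected accordingly.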

Researchers use software called Geant4 to simulate the radiation within the detectors. Geant4 was originally developed at CERN to simulate high-energy particle physics experiments; the MD Anderson team has adapted it to incorporate magnetic fields into their computer dosimetry model.

“Since the ultimate aim of the MR-linac is to treat patients, it is important that our simulations be very accurate and that the results be very precise,” said Daniel O’Brien, a postdoctoral fellow in radiation physics at MD Anderson. “Geant4 was originally designed to study radiation at much higher energies than what is used to treat patients. We had to perform tests to make sure that we had the accuracy that we needed.”

Using the Lonestar supercomputer at the Texas Advanced Computing Center (TACC), the research team simulated nearly 17 billion particles of radiation per detector to get the precision that they needed for their study.
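
Why so many particles? Monte Carlo dose estimates converge slowly: the statistical uncertainty of the scored dose falls only as the square root of the number of simulated histories. The sketch below makes the scaling concrete; the target precisions and the single-history spread are illustrative assumptions, not the study’s actual tolerances.

```python
# Each simulated particle history is an independent random sample, so
# the standard error of the mean dose shrinks as 1/sqrt(N).

def histories_needed(relative_sigma_one, target_relative_sigma):
    """Histories required to reach a target relative uncertainty,
    given the relative spread of a single history's contribution."""
    return (relative_sigma_one / target_relative_sigma) ** 2

# Assume a single history carries ~100% relative spread:
for target in (1e-2, 1e-3, 1e-4):  # 1%, 0.1%, 0.01%
    n = histories_needed(1.0, target)
    print(f"{target:.0e} relative uncertainty -> ~{n:.0e} histories")
```

Reaching 0.01 percent precision already takes on the order of 10^8 effective histories per scored quantity, and only a small fraction of simulated particles ever deposit energy in the small chamber cavity, which is how the count climbs into the billions.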

In August 2016, they published magnetic field correction factors in Medical Physics for six of the most commonly used ionization chamber detectors (gas-filled chambers used to verify that the dose delivered by a therapy unit is correct). They are now working to verify these results experimentally.

“The MR-linac is a very promising technology but it also presents many unique challenges from a dosimetry point of view,” O’Brien said. “Over time, our understanding of these effects has improved considerably, but there is still work to be done and resources like TACC are an invaluable asset in making these new technologies safe and reliable.”

“Our computer simulations are important because their results will serve as the foundation to extend current national and international protocols to perform calibration of conventional linacs to MR-linacs,” said Gabriel Sawakuchi, assistant professor of Radiation Physics at MD Anderson. “However, it is important that our results be validated against measurements and independent simulations performed by other groups before being used clinically.”

(The project was partially funded by Elekta, a Swedish company that provides radiation therapy equipment and clinical management for the treatment of cancer and brain disorders.)

Proton Therapy Planning

X-ray radiation is the most frequently used form of high-energy cancer treatment, but a newer approach uses a beam of protons to deliver energy directly to the tumor, with minimal damage to surrounding tissue and fewer of the side effects associated with x-ray therapy.

Like x-ray radiation, proton therapy blasts tumors with beams of particles. But whereas traditional radiation uses photons, the particles of light that make up x-rays, proton therapy uses protons: hydrogen atoms that have lost their electron.

Proton beams have a unique physical characteristic known as the ‘Bragg peak’: most of the beam’s energy is deposited within a narrow region at a specific depth in the body, where it has its maximum destructive effect. X-ray radiation, on the other hand, deposits energy and kills cells along the whole length of the beam. This can lead to unintended cell damage and even secondary cancers that develop years later.
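
The depth of the Bragg peak is set by the beam energy. A common empirical description is the Bragg-Kleeman rule, R = αE^p. The sketch below uses textbook constants for protons in water; it is a back-of-the-envelope illustration only, not how clinical planning systems compute range.

```python
# Bragg-Kleeman rule for the range of protons in water: R = alpha * E**p.
ALPHA_CM = 0.0022  # cm / MeV**p, empirical constant for water
P_EXP = 1.77       # empirical exponent for protons

def bragg_peak_depth_cm(energy_mev):
    """Approximate depth (cm) of the Bragg peak for a given beam energy."""
    return ALPHA_CM * energy_mev ** P_EXP

for e_mev in (70, 150, 230):  # typical clinical proton energies, MeV
    print(f"{e_mev:3d} MeV -> peak near {bragg_peak_depth_cm(e_mev):5.1f} cm")
```

A 70 MeV beam peaks about 4 cm deep, suitable for shallow tumors, while a 230 MeV beam reaches roughly 33 cm; stepping through energies is what lets clinicians place dose at successive depths.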

Compared with current radiation procedures, proton therapy spares healthy tissue in front of and behind the tumor. Because the patient can be irradiated from multiple directions and the intensity of the beams can be finely modulated, the method further reduces adverse effects.

Proton therapy is particularly effective when irradiating tumors near sensitive organs — for instance near the neck, spine, brain or lungs — where stray beams can be particularly damaging.

Medical physicists and radiation oncologists from Mayo Clinic in Phoenix, Arizona, in collaboration with MD Anderson researchers, recently published a series of papers describing improved planning and use of proton therapy.

Writing in Medical Physics in January 2017, they showed that in the three clinical cases studied, their chance-constrained model was better at sparing organs at risk than the current method. The model also gave users a flexible way to balance plan robustness against plan quality, and it generated plans much faster than the commercial solution.

The research used the Stampede supercomputer at TACC to conduct computationally intensive studies of the hundreds of factors involved in maximizing the effectiveness of these treatments while minimizing their risks and uncertainties.

Proton therapy was first developed in the 1950s and came into mainstream use in the 1990s. There are currently 12 proton therapy centers nationwide and the number is growing. However, the cost of the facilities, roughly $200 million, or 30 to 50 times more than a traditional x-ray system, means they are still rare. They are reserved for cases that require extra precision, and doctors must maximize their benefit when they are used.

Mayo Clinic and MD Anderson operate the most advanced versions of these devices, which perform scanning beam proton therapy and are able to modulate the intensity of the beam. Wei Liu, one of the lead proton therapy researchers at Mayo Clinic, likens the process to 3-D printing, “painting the tumor layer by layer.” However, this is accomplished at a distance, through a protocol that must be planned in advance.

The specificity of the proton beam is its greatest advantage, but it also means the beam must be precisely calibrated and that deviations from the ideal must be accounted for. For instance, hospital staff position patients on the treatment table of the device, and placing a patient even a few millimeters off-center can affect the success of the treatment.

Moreover, every patient’s body has a slightly different chemical composition, which can make the proton beam stop at a different depth than intended. Even a patient’s breathing can throw off the location of the beam placement.

“If a patient has a tumor close to the spinal cord and this level of uncertainty exists, then the proton beam can overdose and paralyze the patient,” Liu said.

The solution to these challenges is robust optimization, which uses mathematical techniques to generate a plan that can manage and mitigate the uncertainties and human errors that may arise.

“Each time, we try to mathematically generate a good plan,” he said. “There are many unknown variables. You can choose different beam angles or energy or intensity. There are 25,000 variables or more, so generating a plan that is robust to these mistakes and can still get the proper dose distribution to the tumor is a large-scale optimization problem.”
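
The article does not spell out the mathematical form of Liu’s method, but a common formulation in the robust-planning literature is worst-case (minimax) optimization: choose beamlet weights that minimize the plan’s error under the single worst of a set of error scenarios, such as setup shifts and range errors. The toy sketch below uses entirely synthetic data and made-up dimensions to illustrate the idea with a projected subgradient method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 40 beamlets, 60 voxels, 5 error scenarios. Each scenario s
# has its own dose-influence matrix D[s]: entry (i, j) is the dose voxel
# i receives per unit weight of beamlet j under that scenario.
n_beam, n_vox, n_scen = 40, 60, 5
D_nom = rng.uniform(0.0, 1.0, (n_vox, n_beam))
D = np.stack([D_nom + 0.05 * rng.standard_normal((n_vox, n_beam))
              for _ in range(n_scen)])
d_presc = np.ones(n_vox)  # prescribed dose, arbitrary units

def worst_case(x):
    """Worst-case (over scenarios) squared deviation from prescription."""
    costs = ((D @ x - d_presc) ** 2).sum(axis=1)
    s = int(costs.argmax())
    return costs[s], s

# Projected subgradient descent: step along the gradient of the
# currently worst scenario, then project the weights onto x >= 0.
x = np.full(n_beam, 0.05)
for _ in range(2000):
    cost, s = worst_case(x)
    grad = 2.0 * D[s].T @ (D[s] @ x - d_presc)
    x = np.maximum(x - 1e-4 * grad, 0.0)

print(f"final worst-case cost: {worst_case(x)[0]:.3f}")
```

A clinical implementation must handle tens of thousands of beamlet weights, dose-volume constraints on each organ, and many more scenarios, which is what pushes the planning problem onto a supercomputer.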

To solve these problems, Liu and his team use supercomputers at the Texas Advanced Computing Center.

“It’s very computationally expensive to generate a plan in a reasonable timeframe,” he continued. “Without a supercomputer, we can do nothing.”

Liu has been developing proton beam planning protocols for many years. Leading commercial companies have adopted methods similar to those that Liu and his collaborators developed as the basis for their radiation planning software.

Recently, Liu and his collaborators extended their studies to include the uncertainties introduced by patients’ breathing, an approach they call “4D robust optimization” because it accounts for time as well as the three spatial dimensions.

In the May 2016 issue of the International Journal of Radiation Oncology, they showed that compared to its 3D counterpart, 4D robust optimization for lung cancer treatment provided more robust target dose distribution and better target coverage, while still offering normal tissue protection.

“We’re trying to provide the patient with the most effective, most reliable, and most efficient proton therapy,” Liu said. “Because it’s so expensive, we have to do the best job to take advantage of this new technology.”

(Liu’s work is supported by grants from the National Institutes of Health’s National Cancer Institute and recently received support from the State of Arizona.)

Uncovering the Quantum Basis of Proton Cancer Therapy

Like many forms of cancer therapy, clinicians know that proton therapy works, but precisely how it works is a bit of a mystery.

The basic principle is not in question: protons collide with water molecules, which make up roughly 70 percent of cells, triggering the release of electrons and free radicals that damage the DNA of cancerous cells. The protons also collide with DNA directly, breaking bonds and crippling DNA’s ability to replicate.
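
The first step, the radiolysis of water, can be written out explicitly. The channels below are the standard ones from radiation chemistry (the article itself does not enumerate them):

```latex
% Primary water radiolysis channels (standard radiation chemistry)
\begin{align*}
\mathrm{H_2O} &\longrightarrow \mathrm{H_2O^{+}} + e^{-}
  && \text{(ionization by the passing proton)}\\
\mathrm{H_2O^{+}} + \mathrm{H_2O} &\longrightarrow \mathrm{H_3O^{+}} + {}^{\bullet}\mathrm{OH}
  && \text{(hydroxyl radical formation)}\\
e^{-} + n\,\mathrm{H_2O} &\longrightarrow e^{-}_{\mathrm{aq}}
  && \text{(electron solvation)}
\end{align*}
```

The hydroxyl radical in particular is highly reactive and accounts for much of the indirect damage to DNA.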

Because of their high rate of division and reduced ability to repair damaged DNA, cancerous cells are much more vulnerable to DNA attack than normal cells and are killed at a higher rate. Furthermore, a proton beam can be focused on the tumor area, causing maximum damage to cancerous cells and minimal damage to surrounding healthy cells.

However, beyond this general microscopic picture, the mechanics of the process have been hard to determine.

“As happens in cancer therapy, they know empirically that it works but they don’t know why,” said Jorge A. Morales, a professor of chemistry at Texas Tech University and a leading proponent of the computational analysis of proton therapy. “To do experiments with human subjects is dangerous, so the best way is through computer simulation.”

Morales has been running computer simulations of proton-cell chemical reactions using quantum dynamics models on TACC’s Stampede supercomputer to investigate the fundamentals of the process. Computational experiments can mimic the dynamics of the proton-cell interactions without causing damage to a patient and can reveal what happens when the proton beam and cells collide from start to finish, with atomic-level accuracy.

Quantum simulations are necessary because the electrons and atoms that underlie proton cancer therapy’s effectiveness do not behave according to the laws of classical physics. Rather, they are governed by the laws of quantum mechanics, which deal in probabilities of position, speed, and reaction occurrence rather than in precisely defined values of those three variables.
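
Whatever the specific model, the core of such a simulation is propagating the system’s wavefunction under the time-dependent Schrödinger equation, written here in its generic form (the Hamiltonian Ĥ would encode the incoming proton together with the target molecule):

```latex
i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t),
\qquad |\Psi(\mathbf{r},t)|^{2} = \text{probability density}
```

Reaction probabilities and fragment yields are then read off from the evolved wavefunction rather than from deterministic particle trajectories.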

Morales’ studies on Stampede, reported in PLOS ONE in March 2017, as well as in Molecular Physics and Chemical Physics Letters (both 2014), have determined the basic byproducts of protons colliding with water within the cell, and with nucleotides and clusters of DNA bases, the basic units of DNA. The studies shed light on how protons and their water radiolysis products damage DNA.

The results of Morales’ computational experiments match the limited data from physical chemistry experiments, leading to greater confidence in their ability to capture the quantum behavior in action.

Though fundamental in nature, the insights and data that Morales’ simulations produce help researchers understand proton cancer therapy at the microscale and could help refine factors like dosage and beam direction.

“The results are all very promising and we’re excited to extend our research further,” Morales said. “These simulations will bring about a unique way to understand and control proton cancer therapy that, at a very low cost, will help to drastically improve the treatment of cancer patients without risking human subjects.”
