
Two-dimensional MXenes improve perovskite solar cell efficiency

With the reality of climate change looming, the importance of viable green energy sources is higher than ever. Solar cells are one promising avenue, as they can convert readily available visible and ultraviolet energy into usable electricity. In particular, perovskite materials sandwiched between other support layers have demonstrated impressive power conversion efficiencies. Current challenges reside in optimizing perovskite/support layer interfaces, which can directly impact power conversion and cell degradation. Researchers Antonio Agresti et al. under the direction of Aldo Di Carlo at the University of Rome Tor Vergata in Italy have investigated how cells containing two-dimensional titanium-carbide MXene support layers could improve perovskite solar cell performance.

Tuned MXenes as buffer layers

To obtain good power conversion within a perovskite solar cell, all layers and layer interfaces within the cell must have good compatibility. Typical cells contain the active perovskite material sandwiched between two charge transport layers, which are then adjacent to their corresponding electrodes. Support layers may also be added. Charge mobility, energy barriers, interface energy alignment, and interfacial vacancies all impact compatibility and subsequent cell performance and stability. Thus, engineering well-suited interfaces within the cell is paramount to cell success and long-term stability, an important criterion for potential commercialization.

Two-dimensional buffer materials could help to modify and promote useful interface interactions. MXenes, a growing class of two-dimensional transition-metal carbides, nitrides, and carbonitrides, have shown impressive electronic properties that are easily tuned via surface modification. For example, the band gap of an MXene can be modified by changing the surface termination group from oxygen to hydroxide. Additionally, MXene composition impacts the overall material performance. This type of fine-tuning allows impressive control over MXene properties and makes them ideal for interface adjustments.

A complicated MXene/perovskite sandwich

Agresti et al. utilized titanium-carbide MXenes, Ti3C2Tx, where T can be oxygen, hydroxide, or fluoride, as buffer layers within perovskite solar cells. By inserting these MXenes in between active material layers in their functional perovskite solar cell, they demonstrated an impressive cell efficiency of 20.14%. The fill factor, the ratio of the cell’s maximum power output to the product of its open-circuit voltage and short-circuit current, was 77.6%. The short-circuit current density, or largest current density that can be drawn from the cell, was 23.82 mA/cm². The maximum possible voltage, called the open-circuit voltage, was 1.09 V. For comparison, commercially available silicon solar cells have a typical cell efficiency of 25%, a fill factor of about 83%, a short-circuit current density of 43 mA/cm² and an open-circuit voltage of 0.706 V.
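These figures hang together: under standard 1 sun illumination (100 mW/cm², assumed here), the product of open-circuit voltage, short-circuit current density and fill factor reproduces the quoted efficiency. A minimal back-of-the-envelope check:

```python
# Back-of-the-envelope check of the reported cell efficiency from its
# current-voltage parameters, assuming standard AM1.5 illumination (1 sun).
V_oc = 1.09        # open-circuit voltage (V)
J_sc = 23.82       # short-circuit current density (mA/cm^2)
FF = 0.776         # fill factor (dimensionless)
P_in = 100.0       # assumed incident power density (mW/cm^2)

P_max = V_oc * J_sc * FF            # maximum power density (mW/cm^2)
efficiency = 100 * P_max / P_in     # power conversion efficiency (%)
print(f"PCE ≈ {efficiency:.2f}%")   # ≈ 20.1%, consistent with the reported 20.14%
```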

The crafted perovskite solar cell also demonstrated improved stability during irradiation, a critical property for commercial cell development. While the reference perovskite cell devoid of MXene layers degraded within 10 minutes when exposed to 1 sun of irradiation, the complex perovskite/MXene sandwich cell retained 83% efficiency after 30 minutes of exposure. Agresti et al. state “The stabilization effect due to MXene can be related to the improved charge extinction, since trapped charges at the interfaces are known to trigger degradation.” Commercial stability standards require less than 2–3% degradation in the first year, meaning additional research is necessary to address long-term perovskite cell stability even with MXene helper layers.

Modelled interfacial interactions yield insights

Density functional theory calculations supported these results, as calculated band profiles and Fermi levels matched well with those directly measured using ultraviolet photoelectron spectroscopy. Accounting for additional unexpected absorbance and an increased number of carriers within the calculations yielded excellent agreement and an appropriate model of MXene/perovskite interfacial interactions.

Using such a model and the supporting experimental results could help fine-tune future MXenes via surface modification to improve perovskite solar cell performance without compromising or significantly altering other material properties.

Full details are reported in Nature Materials.

 


Turbomolecular pumps: diverse customers drive advanced performance

Mechanical turbomolecular pumps are nothing if not versatile. They’re a workhorse technology for analytical instrumentation OEMs – providing a core building block in mass spectrometry, electron microscopy, thin-film deposition systems and plenty more besides. They also enable diverse applications in frontline research – whether that’s in a “big science” particle physics facility or a university materials laboratory focused on nanoscale surface science and engineering. Either way, vacuum specialist Edwards reckons its nEXT family of mechanical turbomolecular pumps can only benefit from the vendor’s twin focus on these distinct – and very different – customer bases.

Over the past three decades, Edwards has shipped more than 320,000 turbomolecular pumps and pumping stations into a wide range of applications and markets – with significant sales growth over the past 10 years in particular. “We’re not only at the forefront of primary vacuum-pump technology, but we are also a technology and innovation leader in turbomolecular pumping,” claims Daniel Reinhard, manager, divisional product management, at Edwards.

“We have always addressed a good balance between the OEM instrumentation and scientific end-user markets,” he adds. “Because our pumps are used extensively by the analytical instrument OEMs, we get plenty of transferable advantages for the scientific end-user – including economies of scale, compact footprint, robustness and reliability.”

Daniel Reinhard: “We have always addressed a good balance between the OEM instrumentation and scientific end-user markets.” (Courtesy: Edwards)

Those scientific customers are very much front-and-centre for Reinhard, who highlights field serviceability of the nEXT pumps as a significant win in terms of research productivity. Specifically, the nEXT design is such that the customer can change the lower bearing in the pump themselves using a simple toolkit (plus instructions on a YouTube video). “It’s a straightforward process and takes around 10 minutes to change the oil reservoir and bearing,” claims Reinhard. “This ‘design for serviceability’ translates into a lower cost of ownership – because there’s no need to send the pump back to a service hub when the bearing needs replacing – and also minimizes experimental downtime for our research customers.”

Another notable innovation within the nEXT range is the integration of an infrared sensor into the pump to measure the temperature of the rotor directly, whereas previously this measurement was based on a best estimate. “How far you can push your pump comes down to that rotor and how hot it’s getting,” explains Reinhard. “In the past, pump operation had to be fairly conservative, reducing pump speed to avoid overheating and potential damage to the rotor. Now, because we no longer have to estimate shaft temperature, users can push the pump harder – which means a bigger performance envelope, while providing peace of mind when it comes to robustness and reliability.”

Listening to the customer

As for the bigger picture, product development of Edwards’ turbomolecular pumps and pumping stations is shared between the company’s Global Technology Centre (GTC) in Burgess Hill, UK, and three specialist product companies located in Lutin, Czech Republic; Cologne, Germany; and Yachiyo, Japan. Within a diversified and global R&D effort, the GTC employs a team of scientists and engineers dedicated to core technology development and validation across all of Edwards’ product lines, including nEXT pumps. Effectively the GTC is the engine-room of new product innovation at Edwards, with the focus squarely on next-generation enabling technologies and product platforms.

While part of the GTC’s remit is to address requirements coming in from the end-users, the centre spends at least half of its time working on longer-range blue-sky R&D. “Some of this will be driven by direct market needs and really listening to customer requirements,” says Reinhard, “while some of it will be driven by the GTC team thinking there’s a new technology opportunity.”

The Edwards product companies, meanwhile, concentrate on development and evolution of existing product lines. A typical example of their role is the recent addition of an onboard micro USB port to the nEXT pumps, allowing users to configure, control and monitor the pumps remotely from a personal computer.

“We have really strong links from the sales teams into the core business,” says Reinhard. “That market intelligence helps us to understand what the customers want and what they don’t want, allowing market sector managers and product managers to define the priorities for continuous improvement of our nEXT pumps for the product companies. Next year, for example, we will be launching more nEXT variants to extend the pumping speed ranges covered by this platform.”

Another significant player in the Edwards innovation ecosystem is its product company in Eastbourne, UK, which specializes in the design, development and manufacture of electronics for use across the Edwards product range. All the electronics in the nEXT pumps, for example, are designed and manufactured in-house at Eastbourne – including the TIC and TAG controllers that support the pumps, as well as the controller in the T-Station pump carts. Engineering staff from Eastbourne are also part of the development and introduction teams for the nEXT pumps, “ensuring an incredibly close bond between pump and electronics that’s only possible with a full in-house electronics design and manufacturing capability”, says Reinhard.

Product development at Edwards
nEXT generation: the Edwards Global Technology Centre (GTC) in Burgess Hill, UK, employs a team of scientists and engineers dedicated to core technology development and validation across all of Edwards’ product lines, including nEXT pumps. (Courtesy: Edwards)

“This means that nEXT pumps are optimized for this pairing, and we have full control and flexibility on our electronics,” he adds. “We can do variants for our analytical instrument customers very easily, something that is much harder if you purchase your electronics from a third party. Because we specialize in the electronics, we also can bring reliability and functionality benefits to our customers much faster and more easily than if the electronics were outsourced.”

Vacuum made easy

For many research scientists, of course, vacuum technology will always remain a means to an end – an essential enabler that works best when it does its job unnoticed and uninterrupted. A case in point is Felix Hofmann, whose group at the University of Oxford, UK, uses a range of experimental and analytical techniques to study the role that atomic-scale defects play in the mechanical, physical and failure properties of structural alloys.

“It’s usual to think of defects as something detrimental, something bad,” says Hofmann. “We’re interested in what sorts of defects are created under different conditions – whether mechanical deformation, chemical changes or irradiation – and also how we can control and tune those defects to deliver improved material functionality.”

For his part, Hofmann is typical of many scientific end-users of vacuum products. “What I want to do is think about vacuum as little as possible,” he explains. “Essentially I want a system that pulls the vacuum and is 100% reliable, 100% of the time – it’s that simple. We’ve got two of Edwards’ turbomolecular pumping stations in our lab for that reason.”

So what role does vacuum play in Hofmann’s science? One particular area of interest is thermal transport in very thin surface layers – and specifically the use of ion implantation to mimic the kinds of degradation that materials will undergo in future fusion reactors. The damaged layer that this technique creates is only a few microns thick, which means that special laser techniques are needed to measure the material properties – in particular, thermal conductivity – within the very thin surface layer.

What’s more, those laser measurements need to be carried out in vacuum to avoid spurious signals from air next to the sample surface. “We’ve built a bespoke vacuum chamber that also gives us flexibility when it comes to inserting different types of sample environments,” says Hofmann. “For example, we’re currently in the process of building a heating stage; there will also be a deformation rig with the ability to do some electrical loading.”

Hardware aside, Hofmann says a key benefit of the relationship with Edwards is the active dialogue after the point of sale. “What’s been really important is having, in some sense, real flexibility in terms of the vacuum chamber configuration,” he explains. “It turns out the original vacuum system design we came up with was not optimal. But working together with Gavin [our sales engineer at Edwards] helped us to realize that we can separate the turbomolecular pump from the pumping station and attach it directly to the chamber, while still getting the push-button functionality that the pumping station offers.”

  • Edwards will feature the nEXT range of mechanical turbomolecular pumps on booth W10 at the Vacuum Expo in Coventry, UK (9-10 October 2019).

Pump it up: products in brief

The Edwards family of mechanical turbomolecular pumps and pumping stations comprises the following core product lines:

  • nEXT turbomolecular pumps are hybrid bearing pumps with a compound drag stage and integrated controllers for pumping speeds from 47 to 400 l/s. All nEXT pumps feature a permanent magnetic upper bearing, which eliminates hydrocarbons at the top of the rotor, and an oil-lubricated lower bearing for reliable high-speed operation. The on-board controller interfaces directly with Edwards’ TIC and TAG controllers to facilitate system integration.
  • The T-Station 85 is a compact turbomolecular pumping station that combines an nEXT85H turbomolecular pump with either a dry-diaphragm or oil-sealed backing pump and a simple controller. Pumping speeds range from 47 to 84 l/s. The T-Station 85 comes with an integrated turbo and active gauge controller to enable single-button start/stop of the system and control of one active gauge in general laboratory applications.
  • nEXT turbomolecular pumping stations are configurable with turbomolecular pump speeds ranging from 47 to 400 l/s and a choice of oil-sealed or dry backing pumps ranging from 1 to 20 m³/h (a rough feel for what these pumping speeds mean in practice is sketched after this list). All nEXT pumping stations feature an integrated TIC turbo and instrument controller, offering full system control (including up to three active gauges) via an intuitive user interface. The pumping stations are supplied ready to run straight out of the box and include RS232 serial communications and Windows software for monitoring and control.
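As a rough illustration of what pumping speeds in the tens to hundreds of litres per second imply for evacuation times, the sketch below applies the idealized exponential pump-down relation t = (V/S) ln(p0/p). The chamber volume and pressures are hypothetical, and real systems are dominated by outgassing and conductance limits rather than this simple formula.

```python
# Idealized pump-down estimate: constant effective pumping speed S, no
# outgassing or conductance losses. All numbers are hypothetical and chosen
# purely to illustrate the role of the quoted l/s figures.
import math

V = 50.0          # chamber volume (litres), assumed
S = 400.0         # effective pumping speed (l/s), top of the nEXT range
p_start = 1e-2    # starting pressure (mbar), after roughing
p_target = 1e-6   # target pressure (mbar)

t = (V / S) * math.log(p_start / p_target)   # exponential pump-down model
print(f"Idealized pump-down time ≈ {t:.1f} s")
```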


Indian Ocean warming could be stabilizing Atlantic circulation, say scientists

  • This story is part of Covering Climate Now, a global collaboration of more than 250 news outlets to strengthen coverage of the climate story. 

Warming in the tropical Indian Ocean could strengthen the Atlantic meridional overturning circulation (AMOC) system that transports warm water from the tropics into the North Atlantic, even though AMOC is expected to weaken or even halt as a result of climate change. That is the finding of a global climate simulation run by scientists in the US. They say that this is a previously unidentified link that needs to be investigated further and highlights the important role that the tropical Indian Ocean plays in global climate.

AMOC is a large system of ocean currents that transport warm water from the tropical Atlantic Ocean north towards the Arctic. The circulation is driven by differences in the water density, caused by differences in temperature and salinity. When the warm water reaches the North Atlantic it cools and its salinity increases. This now denser water then sinks and slowly moves southwards, back to the tropics, where it is warmed and the circulation continues.
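The dependence of density on temperature and salinity can be illustrated with a linearized equation of state for seawater, rho ≈ rho0[1 − alpha(T − T0) + beta(S − S0)]. The coefficients below are typical textbook values used purely for illustration, not figures from the study.

```python
# Minimal sketch: linearized seawater equation of state, showing why cooling
# and a salinity increase both make North Atlantic surface water denser.
# Coefficients are typical textbook values, used only for illustration.
rho0 = 1027.0         # reference density (kg/m^3)
alpha = 2e-4          # thermal expansion coefficient (1/degC)
beta = 8e-4           # haline contraction coefficient (1/psu)
T0, S0 = 10.0, 35.0   # reference temperature (degC) and salinity (psu)

def density(T, S):
    """Linearized density: lower temperature and higher salinity raise rho."""
    return rho0 * (1 - alpha * (T - T0) + beta * (S - S0))

print(density(T=20.0, S=36.0))  # warm, salty tropical surface water
print(density(T=5.0, S=36.5))   # cooled, saltier subpolar water (denser)
```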

The AMOC plays an important role in regulating the climate in Europe, but recent research has suggested that it has been weakening since the mid-twentieth century. Models also indicate that this slowdown will continue as the climate warms, but there is uncertainty around the magnitude of the weakening.

Greater warming

The tropical Indian Ocean has also experienced changes linked to global warming since the mid-twentieth century, with its surface temperature increasing by more than 1 °C. This is around 50–60% greater than warming seen in other tropical basins. Due to the direct and indirect influence of the tropics on global climate, Shineng Hu, of the University of California-San Diego, and Alexey Fedorov, of Yale University, wondered if the warming of the Indian Ocean could impact global ocean circulation currents.

To test their prediction, the researchers ran a global climate model that simulated a uniform surface warming of 1 °C on the tropical Indian Ocean. This was found to strengthen the AMOC by around 30%. They also ran simulations of 2 °C of surface warming and 1 °C of surface cooling. These models suggest that the AMOC responds almost linearly to changes in Indian Ocean temperature, strengthening as it warms and weakening as it cools, they report in Nature Climate Change.
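One way to read that near-linear behaviour is as a sensitivity of roughly 30% change in AMOC strength per degree of Indian Ocean warming. The sketch below simply extrapolates that quoted figure to the other experiments under the linearity assumption; it does not use the paper's actual numbers for those runs.

```python
# Illustrative linear scaling of the AMOC response: the study reports roughly
# a 30% strengthening per 1 degC of Indian Ocean warming and a near-linear
# response. Values for the other experiments are extrapolations under that
# assumption, not the paper's own results.
sensitivity = 30.0    # approximate % change in AMOC per degC of warming

for delta_T in (-1.0, 1.0, 2.0):   # cooling and warming experiments
    change = sensitivity * delta_T
    print(f"dT = {delta_T:+.0f} degC -> AMOC change ≈ {change:+.0f}%")
```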

This does not mean, however, that the AMOC is getting stronger overall. Rather, tropical Indian Ocean warming may have had a stabilizing effect, making the slowdown of the circulation system under climate change less pronounced, the researchers say.

Cascade of effects

Hu and Fedorov found that the mechanism behind the Indian Ocean’s impact on the AMOC involves a cascade of effects. As the tropical Indian Ocean warms, increased evaporation of its surface water leads to a rise in rainfall and an associated heat release. This influences local airflow and strengthens tropical atmospheric circulations.

As the surface ocean gets denser, the AMOC gets stronger

Shineng Hu

In the Atlantic Ocean these changes in atmospheric circulations strengthen cross-equatorial winds and lower the sea surface temperature, leading to atmospheric changes that reduce rainfall. This rainfall reduction increases surface water salinity (and hence density), and over time this more saline water is transported by the AMOC to the northern Atlantic. Once there, the denser water sinks faster than before, accelerating the deep-water convection currents. “As the surface ocean gets denser, the AMOC gets stronger,” Hu told Physics World.

Annalisa Cherchi, of the National Institute of Geophysics and Volcanology in Italy, believes that the work is important because it highlights the significance of Indian Ocean warming. “It seems like it is more important in terms of climate change than was thought before,” she says. More modelling and observational work is needed on the Indian Ocean, and we need to pay more attention to how it is simulated in models, she adds, “because it could be important for other processes outside the Indian Ocean”.

Hu says that it is hard to know if Indian Ocean warming will continue to stabilize the AMOC, explaining that it depends on the relative warming rates of Indian, Pacific and Atlantic Oceans. “Over the last few decades the Indian Ocean has been warming faster than the tropical Atlantic and Pacific,” he says. “That, we argue, is important for the AMOC, but if later on the Atlantic or Pacific Ocean can catch up with that warming rate then the Indian Ocean warming impact would be weaker.”

Cherchi says that while the results presented in the study seem robust, they are based on a single model, so need to be repeated with other models. Hu says that this is their next step, looking at “different climate models to see if this idea is robust across different models, and doing some diagnostic analysis to see how much tropical Indian Ocean warming has been contributing to the current AMOC changes over the past few decades”.


UK–EU power links vital

While politically the UK may soon be decoupled from the EU, it is busy building power grid links with mainland Europe. And some say it should build more, creating offshore wind hubs and network systems to help with green power balancing, exports and imports. For example, the planned 1.4 GW HVDC Viking inter-connector between Denmark and Lincolnshire will pass through or near some big offshore wind farms. And more are on the horizon.

National Grid has done comparative connection studies for the proposed 1.6 GW Eurolink to the Netherlands and 1.5 GW Nautilus link to Belgium (completion expected by around 2025 and 2027, respectively) with the proposed Sizewell C nuclear plant area in mind, but also local offshore wind projects. Some say the UK should be looking to build many tens of gigawatts of offshore wind in the mid-North Sea and also offshore connection hubs, like the artificial island proposed for Dogger Bank, almost 100 miles out from Hull.

Grid upgrade needed

Certainly, the resource is vast, and the study of European grid issues produced by ENTSO-E, the European Network of Transmission System Operators for Electricity, sees the UK as one of the key hubs in the network. The ENTSO-E study of EU grid issues up to 2040 updates its 2016 “Ten Year Network Development Plan” (TYNDP). It says that there will be a need for increased transmission capacity in some places, both internally and to other countries, to make the system work in 2040. This is largely due to the increasing levels and use of renewables to supply all areas of the European grid; the report says that renewables will meet up to 75% of total electricity demand by 2040, so that “European countries will more than ever need to rely on each other through cross-border exchanges”.

Physical connector links with the EU energy system could mean having to comply with internal EU energy market rules

Dave Elliott

However, interestingly, it suggests that there will be different balances in net supply across the EU. From ENTSO-E’s studies, it looks like north-west European wind, offshore especially, dominates, with a lot of surplus power shifted east at times. Much of that will presumably be from the North Sea. But it will be variable, so there will be technical, regulatory and market challenges to ensure stability, with increased system flexibility. And a need for new grids.

ENTSO-E says that, overall, the benefits of the expanded network far outweigh the necessary efforts that will need to be mobilized for its realization: “A lack of new investments by 2040 would hinder the development of the integrated energy market and would lead to a lack of competitiveness. In turn, this would increase prices on electricity markets leading to higher bills for consumers. By 2040, the ‘No Grid’ extra bill (€43 billion a year in the average case) would be largely above the expected cost of the new grid (€150 bn in total in the TYNDP 2016 plus internal reinforcements, 25% discount rate)”. A lack of investments would also affect the stability of the overall grid and could, in some regions, “threaten the continued access to electricity which also has a cost for society”. And finally, in all the scenarios the organization looked at, “without grid extension, Europe will not meet its climate targets”.

UK benefits

The UK has to be part of this, if only for parochial reasons. It will have a lot of surplus renewable power to export at times, as the renewable capacity builds up to 40, 50 and 60 GW, more than enough much of the time to meet the country’s needs (summer night-time demand is around 20 GW, peak winter demand under 60 GW).  At times though, when UK renewable availability is low and demand high, it may need some top-ups via the grid interconnectors. That said, the exports are likely to dominate, so the UK would be a net earner of substantial income, assuming the surplus can be sold at reasonable prices. That would help offset the cost of building up renewables, and the links can also clearly help with balancing. As the climate policy think tank E3G said earlier this year, the UK government must continue to work closely with the EU to develop cross-border power grid interconnections after Brexit, if it is to ensure the lowest-cost decarbonization pathway. More linking of the UK to EU power grids could help boost its energy security and flexibility as renewables grow.

Last year, interconnectors provided 6% of UK power supply, via the four existing links, making the UK a net importer of power across these links. However, as I noted in my last post, its wind potential is very large, so that pattern should change as more renewables are installed in the UK. Indeed it already has, with the UK being a net exporter to France for much of this year. The government currently plans to have at least 9 GW more grid link capacity. However, the ability to trade profitably depends on many factors — not just the availability of capacity and grid links, but also demand patterns, prices, the regulatory framework and wider policy context. The E3G paper warned that leaving the EU will “severely reduce” the UK’s ability to influence EU energy policy in line with its interests, which may make reaping the full benefits of greater inter-links harder. It could render the UK a rule-taker from the EU in some respects, as physical connector links with the EU energy system could mean having to comply with internal EU energy market rules, such as those covering energy, environment, state aid and competition. Sounds like a familiar issue…

The UK has some of the key resources needed for the emerging Europe-wide grid system (including its vast offshore wind resource) and the power engineering and marine technology expertise (including for offshore wind and undersea links). It may not like the EU single power market any more, but it may nevertheless need to get into it. At least that is the logic of the energy system. Political logic may be different, although it is perhaps worrying that, reportedly, Ireland is looking at a new 500-mile undersea HVDC power link to France, and the EU market, bypassing the UK, with some funding from the EU. It may also be worrying that, post-Brexit, the UK will presumably miss out on EU funding for grid development – like the €800m available under the Connecting Europe programme for interconnectors. That’s supporting some of the already-planned and agreed UK links, but the UK may not be eligible for more after Brexit. So it’s all a bit uncertain and a bit of a mess, whereas the need for links is getting ever clearer and UK green power capacity is building up, offering an export potential.


Magnetic threads slide through blood vessels to reach clots in the brain

A team of researchers from Massachusetts Institute of Technology has designed a new surgical tool that is manoeuvrable through some of the narrowest twisting networks of blood vessels to help treat stroke and aneurysm. Using hydrogels and magnetic materials, they have created a magnetically steerable guidewire that can slide easily through blood vessels to reach blood clots in the brain (Science Robotics 10.1126/scirobotics.aax7329).

It is vital to treat stroke as quickly as possible to prevent potentially lethal damage – outcomes are better for patients who are treated within the first hour after the stroke, known as the “Golden Hour”. One method used to reduce clots is an endovascular procedure in which a guidewire inserted in a leg or groin is manipulated through the body to the blood vessel in the brain where the blockage is located.

Endovascular surgery is technically difficult, and the procedure requires a specially trained surgeon. A traditional guidewire can also be tricky to manoeuvre through tight spots and can create friction and further damage vessels.

To address the difficulties of control and friction, the team combined their knowledge of hydrogels and magnetic materials to design the new guidewire to be externally steerable and less damaging to blood vessels.

Down to the wire

The wire core is made from nitinol, a nickel titanium alloy. It is bendy and springy – allowing for more flexibility within the complex maze of brain blood vessels. The researchers coated the core with a paste containing magnetic particles to make it steerable. This allows the wire to be operated remotely.

The team also coated the wire in a hydrogel, to make it pass through blood vessels more easily. Hydrogels are formed from biocompatible polymers that can hold a large amount of water and are particularly smooth. Coating the wire with the hydrogel reduced friction on the walls of the blood vessel, making it easy to manipulate, even in tight spots.

To test the wire, the researchers passed it through a life-sized silicone replica of blood vessels in the brain. They filled the replica with a blood-like fluid and, using a large magnet, successfully manipulated the wire through the complex model.

The guidewire can also be functionalized to deliver clot-reducing drugs or break up blockages using lasers. For the latter, the team was able to replace the nitinol core of their previous model with an optical fibre designed to transmit laser light. They could then remotely activate the laser once it reached the blockage.

Looking to the future

In these tests, the system was operated by moving the magnets manually, with the wire in direct view, to control its location. In the future, however, the team hopes to manipulate the magnets with a more precise control system whilst visualizing the wire with a fluoroscope to replicate surgical conditions.

“Existing platforms could apply magnetic field and perform the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick,” lead author Yoonho Kim says. “In the next step, our hope is to leverage existing technologies to test our robotic thread in vivo.”


How do we manage the retreat of communities hit by climate change?

Imagine you’re a passenger on an overloaded boat in a storm. You and every experienced person aboard know the ship is swamping. But the captain and crew are incompetent. Bent on the course they think will most profit themselves, they tell you the ship is great. Many passengers find this a relief, thrilled they don’t have to change how they act.

Unfortunately, you and other concerned passengers cannot convince the despicable captain and crew to take action. Had they acted earlier, both ship and cargo could have been saved; now it’s possibly too late. Part of the cargo will have to be heaved overboard to rescue the ship. But how should you choose how much cargo and which bit of it? And what will be the impact on the ship and passengers?

This was the basic situation tackled at a conference held last June at Columbia University in New York, entitled At What Point Managed Retreat? Resilience Building in the Coastal Zone. The storm is global warming; the swamping is its drastic effects; and heaving the cargo is what must be done to preserve anything like the life we now have.

Down, down

The conference was staged by the Earth Institute – a centre at Columbia that takes an interdisciplinary approach to complex looming issues facing the planet and its inhabitants. It attracted an overflow crowd of about 400 people, plus a further 300 who tuned in live online. Participants included social scientists, administrators, educators, activists, elders, journalists, lawyers, philosophers and representatives of non-governmental organizations.

Alex Halliday, the British geochemist who is the institute’s director, conveyed the urgency in his opening remarks. Sea levels are now rising by about 3–4 mm a year, he said, which doesn’t sound like much, but the rate could increase 10-fold. That will hit coastal cities, force hundreds of millions of people to relocate and destroy trillions of dollars in property and infrastructure. Cascading effects associated with other climate developments will trigger monsoons, river floods, hurricanes, melting glaciers and more. “Sorry to put a damper on the beginning of the meeting,” Halliday said.

Robin Bronen, a lawyer and director of the Alaska Institute for Justice, a non-profit human-rights organization, was next. “I am heartbroken,” she said. “I can barely articulate the level and rapidity of change that I am bearing witness to.” The Arctic is, after all, in the front lines of climate change. In recent winters, temperatures there have risen by 3.5–4 °C, while last March they were 11 °C above the norm – far more than climate models had predicted.

I can barely articulate the level and rapidity of change that I am bearing witness to.

Robin Bronen

“That is rapidly changing the snow and ice, which…the indigenous communities that we work with rely on for their hunting and gathering food,” Bronen said, adding that it forces the communities to plan relocations. “What’s causing these changes is my and your greenhouse-gas emissions. If we do not radically cut those greenhouse-gas emissions, we are condemning millions of people to an uncertain future.”

The message conveyed by the 133 speakers who followed was no more upbeat. Though mainly from the US, speakers also came from Australia, Bangladesh, Denmark, Ghana, Oman, Slovakia and half a dozen other countries. Presentations included a documentary about the impact of Hurricane Sandy on New York City neighbourhoods, and a New Hampshire theatre troupe staged a mock discussion between a scientist, a homeowner and an elected community official in which the audience took part.

Just transition

Few papers addressed the scientific nuts and bolts of coastal flooding. Instead, most talks – and most hallway discussions – concerned social issues. Who should decide which coastal areas are to be relocated? How will decisions be made about when and where to relocate communities, or what support networks will have to be developed and maintained for them? And what about the delicate interactions between the decision-makers and the communities to be relocated? Even these days, when new buildings and roads are built, such interactions can be hugely contentious; the tensions and stakes when it comes to coastal relocations will be much higher, with the interactions taking place on a global scale.

How can we ensure that the most vulnerable communities who will have the least information and poorest infrastructure don’t end up treated the worst?

Robert P Crease

Social-justice issues were also debated. Many waterfront communities on the US eastern seaboard and along the Gulf of Mexico historically were stolen from Native American tribes, some of which have already been relocated several times against their will. How can we ensure that the most vulnerable communities who will have the least information and poorest infrastructure don’t end up treated the worst?

Many threatened regions are now occupied by wealthy landowners whose properties are products of over-development and greed. How can we ensure that the more economically valuable real estate of the top 1%, who after all have by far the most political clout, does not once again receive preferential treatment? How, in other words, can we ensure a “just transition”, as one speaker put it?

The critical point

At What Point Managed Retreat? was the first major conference to discuss not the looming danger of climate change, but how to cope with its unfolding. By the end of the three-day meeting, few clear solutions had emerged, though it was an important step forward to recognize social justice as an essential component of any solution.

It was affirming to see that you’re not crazy and that other people are worrying about this.

Radley Horton

Some participants were left fearful and depressed, but conference co-organizer Radley Horton told me afterwards that many felt relief and even optimism. “It was affirming to see that you’re not crazy and that other people are worrying about this,” he said to me. “It’s still early days. But it’s encouraging to see people beginning to discuss, proactively, what safer and less vulnerable places might look like.”

At least some passengers see the need to develop a plan for dealing with the vile and immoral actions of the captain and crew.



Regional climate shapes river topography

  • This story is part of Covering Climate Now, a global collaboration of more than 250 news outlets to strengthen coverage of the climate story. 

Strong connections between regional climate and the topography of rivers have been identified by researchers in the UK. Shiuan-An Chen and Katerina Michaelides at the University of Bristol and colleagues at Queen Mary University of London and Cardiff University discovered the relationship by combining an extensive study of satellite data with numerical modelling. Their analysis could offer important insights into how landscapes could evolve in the future, as climate change brings widespread changes to regional humidity levels.

For some time, geoscientists have understood that the topography of Earth’s land surface is strongly tied to climate through the processes of rainfall, runoff and erosion. The evolution of rivers over time is an important example of this effect.

The topography of a river can be characterized in terms of its longitudinal profile, which is plotted in terms of elevation versus downstream distance. Such profiles reveal two main ways that rivers can make their descent. One common profile resembles a linear ramp with the river falling in a straight line, while the other looks like a concave surface that is steep at the top and flattens out towards the bottom.

However, researchers have struggled to draw links between the longitudinal profiles and regional climates of rivers. One major difficulty is a lack of data on rivers in drylands, which comprise around 40% of the Earth’s land surface.

Global study

In their study, the team aimed to learn more about the connection between climate and profile shape using topographic data gathered during NASA’s Space Shuttle program. These data yielded the longitudinal profiles of over 330,000 rivers spanning the globe, allowing the researchers to explore how longitudinal profile concavity is affected by climate zones of all types.

The team’s analysis clearly shows that longitudinal profiles are more concave in humid environments and become more ramp-like as aridity increases. To explain this trend, they then employed a simple numerical model that accounted for effects including stream flow and erosion. It showed that the shapes of longitudinal profiles strongly depend on the rate of change of stream flow as downstream distance increases.

In humid environments, which have significant rainfall and runoff throughout the year, rivers tend to flow constantly throughout their length, and so stream flow increases with downstream distance. This means that riverbed sediments are continually transported downstream, thereby carving out concave shapes over time. In arid regions, rivers only flow sporadically in localized regions. As a result, sediment transport is far less frequent, and longitudinal profiles remain straighter.
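The behaviour described in these two paragraphs can be mimicked with a toy steady-state profile model (not the authors' actual model): where discharge grows strongly downstream the slope decreases with distance and the profile comes out concave, while near-constant discharge gives a straighter, ramp-like profile. All exponents and values below are arbitrary illustrative choices.

```python
# Toy steady-state river profile: local slope scales as Q(x)**(-m), where Q is
# the discharge at downstream distance x. Strong downstream discharge growth
# (humid case) yields a concave profile; weak growth (arid case) yields a
# near-linear ramp. Numbers are illustrative, not from the study.
import numpy as np

x = np.linspace(1.0, 100.0, 200)   # downstream distance (arbitrary units)
m = 0.5                            # discharge exponent (illustrative)

def normalized_profile(discharge_growth):
    """Integrate the slope from the outlet upstream; return elevation / max."""
    Q = x ** discharge_growth                 # discharge vs downstream distance
    slope = Q ** (-m)                         # steeper where discharge is small
    elevation = np.cumsum(slope[::-1])[::-1]  # elevation above the outlet
    return elevation / elevation[0]

humid = normalized_profile(1.5)   # strong downstream flow increase
arid = normalized_profile(0.1)    # nearly constant flow

mid_point = len(x) // 2
print("mid-profile elevation, humid:", round(float(humid[mid_point]), 2))  # well below 0.5 -> concave
print("mid-profile elevation, arid:", round(float(arid[mid_point]), 2))    # near 0.5 -> ramp-like
```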

The team believes that with improvements in computing power, the techniques could also offer important insights into how the topography of drainage basins could be altered by climate change.

The research is described in Nature.


ASTRO: highlighting radiation oncology breakthroughs

The ASTRO Annual Meeting takes place this week in Chicago. The world’s largest scientific meeting on radiation oncology, the event is predicted to attract over 10,000 attendees, including oncologists, medical physicists, dosimetrists, radiation therapists and other healthcare professionals from around the globe. Here is a small selection of some of the top-rated abstracts highlighted at this year’s meeting.

Cardiac radioablation tackles high-risk arrhythmias

Ventricular tachycardia (VT), the sudden onset of rapid, abnormal heartbeats, is the most lethal heart rhythm disorder. If not treated immediately with defibrillation to shock the heart back into a normal rhythm, VT can be fatal. A new treatment – EP-guided non-invasive cardiac radioablation (ENCORE) – uses a single, high dose of radiation to dramatically reduce VT episodes in high-risk heart patients.

“The results are very promising,” says lead author Clifford Robinson from Washington University School of Medicine in St. Louis. “The use of non-invasive radiation therapy is providing new hope for patients with life-threatening ventricular arrhythmias and limited treatment options.”

Patients at risk for VT are usually given an implantable cardioverter defibrillator (ICD). While shocks from an ICD can be life-saving, they are painful and can result in poor quality-of-life. Patients with repeated VT often receive catheter ablation, an invasive and risky procedure that requires general anaesthesia, and only has a 50% chance of stopping arrhythmias from recurring.

The non-invasive ENCORE procedure fuses electrical (electrocardiogram) and imaging (CT, MRI and PET) data to pinpoint the scar tissue in the heart responsible for the arrhythmias. This region is then targeted with a single dose of stereotactic body radiotherapy (SBRT), with no general anaesthesia needed.

In a phase I/II prospective trial, Robinson and his team treated 19 patients with life-threatening cardiac arrhythmia using a single 25 Gy fraction of SBRT. ENCORE reduced VT episodes by 94% in the first six months. Longer-term follow-up revealed that in 78% of patients, this reduction persisted for more than two years after treatment. Overall survival was 74% after one year and 52% after two.

Serious toxicity was low, but three serious adverse events were observed more than two years after treatment. This is not surprising, explains Robinson, as the patients were often being treated as a last line of defence because they were too sick to undergo further catheter ablation.

An additional benefit of ENCORE, Robinson notes, was the reduction in required medication. “These patients were on heavy doses of medications, with side effects such as liver damage, lung damage, nausea and thyroid problems,” he says. “After they were treated, we saw reduced VT, reduced medication and improved quality-of-life, at least in the intermediate term.”

Radiotherapy can reinvigorate the immune system

Non-small-cell lung cancer (NSCLC) is often diagnosed at a late stage when tumours have already spread, making it difficult to cure. Now, researchers from Yale School of Medicine have shown that delivering SBRT after patients no longer respond to immunotherapy reinvigorates the immune system in some patients with metastatic NSCLC, increasing progression-free survival.

Allison Campbell, a resident in the department of therapeutic radiology at Yale Cancer Center, speaking at the ASTRO Annual Meeting. (Courtesy: ASTRO)

“This study provides one more important piece of data that indicates that, for some patients, the immune system can be a really powerful tool to combat metastatic lung cancer,” explains lead author Allison Campbell from Yale Cancer Center. “It points us in the direction of places to look for biomarkers that might predict which patients would best respond to this type of therapy.”

In the phase II prospective trial, Campbell and her team treated a single cancerous lesion with SBRT in NSCLC patients whose cancer had continued to spread after immunotherapy. They studied 56 patients with two or more tumours, six of whom had already received immunotherapy and immediately underwent SBRT. The other 50 began immunotherapy with pembrolizumab at the start of the trial. Of these, 16 experienced disease progression, at which point they were treated with SBRT.

A total of 21 patients completed both treatments and lived an average five months longer without disease progression. In two patients, tumours outside the treated area shrank by 30% or more (attributed to the abscopal effect) and stayed that way for more than a year. Ten patients experienced disease stabilization following SBRT.

Analysing patients’ peripheral blood cells suggested that T cells played an important role in the immune system response. “We found two things that correlated with patients living longer without their disease progressing,” says Campbell. “Those were T cells infiltrating the tumour before immunotherapy, and the presence of immune-related side effects during the course of treatment, such as inflammation of the lung or gastrointestinal tract.”

In patients who responded well to the combination therapy, the researchers saw a population of CD8 T cells that looked more excited, while in poor responders, they saw a population of CD4 T cells with inhibitory markers. “The bigger picture here is that there are signatures in the peripheral blood that are promising avenues for future identification of people who will respond well to SBRT combined with immunotherapy,” Campbell notes.

The next step will be to validate the findings in a larger population, such as a phase III randomized trial.

Machine learning model predicts irradiation side effects

Radiotherapy plays an integral role in the management of head-and-neck cancers, but can also cause adverse side effects such as sore throat, mouth sores, loss of taste and dry mouth. Severe sore throats can make it difficult for the patient to eat and may lead to weight loss or require temporary insertion of a feeding tube.

For the first time, a machine learning model has accurately predicted two major toxicities associated with head-and-neck radiotherapy: significant weight loss and the need for feeding tube placement. Being able to identify which patients are at greatest risk would allow radiation oncologists to take steps to prevent or mitigate possible side effects.

Jay Reddy. (Courtesy: ASTRO)

“In the past, it has been hard to predict which patients might experience these side effects,” explains lead author Jay Reddy from The University of Texas MD Anderson Cancer Center. “Now we have a reliable machine learning model, using a high volume of internal institutional data, that allows us to do so.”

Reddy and his team developed models to analyse large sets of data, merged from electronic health records, an internal web-based charting tool and the Mosaiq record/verify system. The data included more than 700 clinical and treatment variables for head-and-neck cancer patients who received more than 2000 courses of radiation therapy from 2016 to 2018.

The researchers used the models to predict three endpoints: significant weight loss, feeding tube placement and unplanned hospitalizations. They then validated the results from the best-performing model against 225 subsequent consecutive radiation treatments.

The models predicted the likelihood of significant weight loss and need for feeding tube placement with a high degree of accuracy. They could not, however, predict unplanned hospitalizations with sufficient clinical validity. Reddy notes that adding more training data could improve the accuracy. “As we treat more and more patients, the sample size gets bigger, so every data point should get better. It’s possible we just didn’t have enough information accumulated for this aspect of the model,” he explains.
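For readers curious what such a pipeline looks like in outline, the sketch below trains a classifier on earlier treatment courses and validates it on later ones, mirroring the temporal split described above. The file name, column names and model choice are invented placeholders, not the MD Anderson team's actual code.

```python
# Hypothetical sketch of the described workflow: train on earlier head-and-neck
# radiotherapy courses, validate on subsequent consecutive courses. The CSV
# path, column names and model are placeholders, not the actual pipeline.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("hn_rt_courses.csv")   # one row per treatment course (hypothetical file)

# Assume the remaining columns are numeric clinical and treatment variables.
outcomes = ["weight_loss", "feeding_tube", "hospitalization"]
features = df.drop(columns=outcomes + ["treatment_start"])
target = df["feeding_tube"]             # e.g. predict feeding-tube placement

# Temporal split: fit on earlier courses, validate on the later ones.
train = df["treatment_start"] < "2018-01-01"   # ISO date strings compare lexicographically
model = GradientBoostingClassifier().fit(features[train], target[train])

probabilities = model.predict_proba(features[~train])[:, 1]
print("validation AUC:", roc_auc_score(target[~train], probabilities))
```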

While the machine learning approach can’t isolate the factors that lead to negative side effects, it can help patients and clinicians understand what to expect during the course of treatment. Machine learning models could also potentially predict which treatment plans would be most effective for different types of patients and enable more personalized approaches to radiation oncology.

“Machine learning can make doctors more efficient and treatment safer by reducing the risk of error,” says Reddy. “It has the potential for influencing all aspects of radiation oncology today – anything where a computer can look at data and recognize a pattern.”


Black hole is hairless, reveals analysis of gravitational waves

The no-hair theorem, which says that black holes only have three defining properties, has been tested in a new analysis of the first-ever gravitational waves to be detected.

Maximiliano Isi at the Massachusetts Institute of Technology and colleagues in New York and California looked at the “ringdown” signal from the GW150914 merger of two black holes and have shown that it is consistent with the theorem.

The no-hair theorem is the statement that a black hole is characterized by only three observable properties – its mass, angular momentum and electrical charge. “No hair” refers to the resemblance of a black hole to a bald head with few defining features. While the theorem has no rigorous mathematical proof, it is in line with general relativity and therefore widely accepted.

Information paradox

The theorem is also at the centre of an important paradox of modern physics regarding whether information is destroyed when something is sucked into a black hole. The no-hair theorem suggests that information must be destroyed, whereas quantum theory says otherwise. As a result, understanding whether the no-hair theorem is correct has important implications beyond black holes.

This latest test of the theorem uses data from September 2015, when the LIGO gravitational-wave detectors observed a signal from two black holes orbiting each other in a binary system some 1.3 billion light-years away. Astronomers watched as the objects got closer and closer together until they coalesced into a single black hole with a mass of about 62 Suns.

Ring tones

At first, the resulting black hole is distorted and undergoes a rapid relaxation over a few milliseconds to a more symmetrical state. The distorted black hole has a natural set of oscillatory modes – much like the tones of a bell – and the relaxation involves the emission of gravitational waves at the frequencies of these modes in a process called ringdown.

The exact nature of the ringdown process is defined by the physical properties of the black hole – and therefore the frequencies should be consistent with those predicted by general relativity and the no-hair theorem.

Isi and colleagues found that the ringdown signal can be described in terms of the fundamental mode of the black hole plus at least one overtone. This, they say, is consistent with the hypothesis that the GW150914 merger created a “Kerr black hole” – a rotating black hole with zero electrical charge. Furthermore, the analysis is consistent with a black hole that is characterized by the no-hair theorem.
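In practice, this kind of analysis amounts to fitting the post-merger strain with a sum of damped sinusoids, the fundamental quasinormal mode plus an overtone, and checking whether the best-fit frequencies and damping times match those expected for a Kerr black hole of the remnant's mass and spin. The sketch below builds such a two-mode template; its frequencies, damping times and amplitudes are rough illustrative values for a remnant of roughly 62 solar masses, not the measured GW150914 parameters.

```python
# Minimal two-mode ringdown template: fundamental quasinormal mode plus one
# overtone, each an exponentially damped sinusoid. All parameter values are
# rough illustrative choices, not the measured GW150914 results.
import numpy as np

t = np.linspace(0.0, 0.02, 2000)   # time after the merger peak (s)

def damped_sinusoid(t, amp, freq, tau, phase):
    """One quasinormal mode: amplitude decays with time constant tau."""
    return amp * np.exp(-t / tau) * np.cos(2 * np.pi * freq * t + phase)

fundamental = damped_sinusoid(t, amp=1.0, freq=250.0, tau=4e-3, phase=0.0)
overtone = damped_sinusoid(t, amp=0.9, freq=240.0, tau=1.4e-3, phase=0.5)

template = fundamental + overtone   # model compared against the strain data
print("template peak amplitude:", round(float(template.max()), 2))
```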

The LIGO (and Virgo) gravitational-wave detectors have been upgraded since the detection of GW150914 and ringdown signals taken at improved detector sensitivities should provide a better indication of the validity of the no-hair theorem. Ringdown data could also help astronomers identify exotic astrophysical objects that could mimic black holes.

The research is described in Physical Review Letters.

 


Optical lace could make a ‘nervous system’ for robots

A new sensor containing optical fibres embedded in a 3D-printed elastomer could make for a sensor network that allows robots to feel touch and sense how they interact with their environment. The optical lace, as it has been dubbed, could be distributed throughout the body of a robot and is similar to a biological nervous system as well as being stretchable. It can localize applied deformations with sub-millimetre positional accuracy and sub-Newton force resolution (0.3 N).

In the biological world, animals with poor vision have evolved other forms of perception, such as touch, to navigate their environment thanks to complex networks of nerves distributed throughout their bodies. Although researchers have succeeded in making artificial skin with tactile sensing for robots, wiring nerve-like networks throughout the body of a robot has proved more difficult.

A team of mechanical engineers led by Patricia Xu and Rob Shepherd of Cornell University in the US has now made an optical lace that could overcome this problem. It comprises optical fibres hosting more than a dozen mechanosensors, embedded in a 3D-printed polyurethane elastomer and coupled to a light-emitting diode.

Optical guides detect level of deformation

When the lattice structure is pressed, the optical guides detect the level of deformation (buckling and bending) experienced by the struts in the 3D lattice by measuring the intensity and location of light lost from the optical fibres through coupling. The intensity of the coupled light indicates the magnitude of the deformation itself, explains Xu.

The researchers say that the optical lace can be distributed throughout the body of the robot and not just coated on its surface. It allows the robot to be both exteroceptive – that is, sensitive to touch, so that it can detect where it is pressed – and proprioceptive so that it can measure the level of its own compression and “be aware” of its own body.

The optical lace is similar to a biological nervous system in which individual mechanoreceptors are embedded in the skin and muscle at different locations, explains Xu.

“In animals, these sensors send information about the size and location of deformation to the brain for processing. In the same way, our optical lace has distributed sensors throughout the structure that report the magnitude and position of deformations to a computer. The location is encoded in the position of the sensor and the intensity of light coupled encodes the magnitude of deformation.”
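A minimal sketch of that read-out idea (invented positions, readings and calibration, not the Cornell team's code): the sensor with the strongest coupled-light signal localizes the press, and a calibration factor turns the signal into a deformation magnitude.

```python
# Hypothetical read-out for a distributed optical sensor network: sensor
# position encodes where the press is, coupled-light intensity encodes how
# deep it is. Positions, readings and the calibration factor are invented.
sensor_positions_mm = [0, 10, 20, 30, 40]        # where each mechanosensor sits
coupled_light = [0.02, 0.05, 0.61, 0.08, 0.01]   # normalized coupled-light signals
mm_per_unit_signal = 2.5                         # assumed calibration factor

strongest = max(range(len(coupled_light)), key=lambda i: coupled_light[i])
location = sensor_positions_mm[strongest]               # position -> location
depth = coupled_light[strongest] * mm_per_unit_signal   # intensity -> magnitude

print(f"press detected near {location} mm, deformation ≈ {depth:.2f} mm")
```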

Safer interactions with people

“Our optical sensors are more stable compared to many other stretchable electronic sensors,” she tells Physics World. “In robots, if the sensor is placed closer to the surface in the right orientation, we can measure externally caused deformation (exteroception) and if it is placed deep inside the structure, it measures internal deformation (proprioception).”

Integrating these sensor networks into robots could allow them to more safely interact with people, adds Shepherd. “We are hopeful that, eventually, systems like these will allow robots to assist the elderly and people with reduced abilities. In such applications, the robot would need to know its own shape in order to hold and assist a person without hurting them.”

“Softer than cold, hard cyborgs”

Such robots, which would be softer than the cold, hard cyborgs we are used to seeing in science fiction films, could also be used in manufacturing, he adds. “If they can feel what they’re touching, then that will improve their accuracy.”

In their work, which is detailed in Science Robotics 10.1126/scirobotics.aaw6304 and supported by the Air Force Office of Scientific Research and the Office of Naval Research, the researchers employed physical models to translate sensor signals into deformation states. “In the future we would like to make larger networks and produce more complex deformations, so our current physical models will not work as well,” says Xu. “Machine learning could come in useful here to create these more sophisticated models and detect distortions, like bending and twisting.”

The post Optical lace could make a ‘nervous system’ for robots appeared first on Physics World.

Striving towards a fusion future

“I’ve been thinking about fusion since I was about eight years old,” says David Homfray, Head of Engineering Realisation at the UK Atomic Energy Authority (UKAEA). “It has always fascinated me what we could do if we could harness the power of the Sun and the stars.”

Homfray was recruited by UKAEA in 2002. He originally applied for a position as a mechanical engineer, a role he admits he was “entirely unsuited for”, and didn’t get the job. But his interviewers were so impressed by his energy and enthusiasm that they offered him a role as a physicist instead.

Today, some 17 years later, Homfray is at the bleeding edge of fusion research, a technology that promises to deliver sustainable electricity without harmful emissions. He is now an Engineer in Charge of the Joint European Torus (JET), currently the world’s most powerful fusion machine, and he also leads a team that is maturing the technologies needed to build a working fusion power plant.

“This is without doubt the most exciting time in the 20 years I’ve been here,” says Homfray. “If you’d have asked me even three years ago whether we could deliver fusion power in my lifetime, I would have given you some nice diplomatic answer. Now, in my opinion, I think we will see it in my career.”

Doughnut or apple?

Homfray’s optimism is well founded. An international consortium is currently building the most ambitious fusion experiment to date in rural southern France. ITER will ultimately produce 10 times more energy than is needed to heat its fusion fuel – generating 500 MW of power for 20 minutes using only 50 MW of input power – and one of its core objectives is to prepare the ground for the first large-scale fusion power plants.
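Those headline figures correspond to a fusion gain of Q = 10. As a quick back-of-envelope check, using only the numbers quoted above (not official ITER figures), they also set the energy released per pulse:

```python
# Back-of-envelope check of the ITER figures quoted above.
P_fusion_MW = 500       # fusion power produced
P_input_MW = 50         # heating power supplied to the fuel
pulse_s = 20 * 60       # 20-minute pulse length, in seconds

Q = P_fusion_MW / P_input_MW                  # fusion gain factor
E_GJ = P_fusion_MW * 1e6 * pulse_s / 1e9      # energy released per pulse, in GJ

print(f"Q = {Q:.0f}, energy per pulse = {E_GJ:.0f} GJ")   # Q = 10, 600 GJ
```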

Photograph of the doughnut-shaped vessel of the Joint European Torus
Inside the Joint European Torus (Courtesy: European Consortium for the Development of Fusion Energy)

Since ITER is essentially a scaled-up version of JET’s toroidal tokamak design, the experience that UKAEA has gained with JET has made it a critical partner in the ITER project. JET is providing both a testbed for new ITER technologies and a training ground for the next generation of fusion professionals.

Alongside its central role in the development of ITER, Homfray is enthused that UKAEA is also rapidly expanding its world-class capabilities across a broad range of disciplines that will be crucial to realizing fusion power as fast as possible. This includes several major new facilities, such as Remote Applications in Challenging Environments (RACE), which is developing robotic maintenance techniques for reactors; the Materials Research Facility for processing and analysing radioactive samples; the Fusion Technology Facilities for testing components in the extreme conditions inside a fusion machine; and the Hydrogen-3 Advanced Technology (H3AT) centre for the science of tritium, a key fuel for fusion reactions.

A particularly exciting new development is a major upgrade to the Mega Amp Spherical Tokamak (MAST), a UK facility that represents a different approach to fusion power. MAST exploits a spherical design – like a cored apple, rather than the ring doughnut shape of JET and ITER – that was pioneered by the UKAEA in the late 1990s. The compact geometry of the spherical tokamak requires a lower magnetic field, which is less expensive to produce and maintain.

The upgrade to MAST-U, enabled by funding from the UK’s Engineering and Physical Sciences Research Council, will allow scientists to study long pulse-length plasmas that are closer to the steady-state conditions that will be needed for commercial fusion power plants. “It’s an incredible opportunity for the country to really drive forward the development of a technology the world is crying out for, and in which we are already a global leader,” Homfray adds.

Expanding workforce

With so many new facilities coming online, UKAEA is well equipped to explore a wide variety of promising fusion research avenues. But to make the most of these capabilities, the organization must expand its workforce too. UKAEA needs new recruits, and not just nuclear and plasma physicists. “We’re bringing in people with all types of skills,” says Heather Lewtas, UKAEA’s Head of Manufacturing Realisation. “We’re recruiting chemists, mechanical engineers, physicists, material scientists, biologists, as well as data scientists, AI researchers, roboticists, project managers, business development, HR … you name it, we need them.”

Those joining UKAEA will be contributing to a diverse workforce, which ranges from seasoned nuclear professionals to those just beginning their careers. For the latter, there are certified apprenticeship and graduate schemes in a host of different areas. And all new recruits can take advantage of many exciting continuous professional development schemes, including MSc and PhD fellowships.

Moreover, the collaborative atmosphere at UKAEA allows ideas and results to be shared with colleagues and with the international fusion community. This not only makes UKAEA “an incredibly friendly place to be”, but also accelerates learning and development.


Lewtas joined UKAEA in December 2016. She is a prime example of how new recruits can develop their skills rapidly and find themselves working on important projects. Though she had a PhD in experimental physics from the University of Oxford, as well as postgrad and industrial experience, like many UKAEA staff she had “no background in fusion, no background in nuclear”. Luckily for her, UKAEA’s excellent formal and informal training, including a mentoring scheme and management development programme, enabled her to rapidly get up to speed.

As a result, just a year into her role at UKAEA she was tasked with leading a project called Joining and Advanced Manufacturing (JAM), which aims to find innovative manufacturing and testing solutions for a fusion power plant by forging collaborations with universities, the UK’s High Value Manufacturing Catapult centres, as well as SMEs and industry. “I enjoy making links between different areas of science and engineering, or between different sectors,” she says. “I absolutely love the fact that I’ve got the opportunity to do that and to make a real difference in progressing fusion in the process.”

Lewtas could not have achieved so much success without a dynamic, energized team behind her. JAM team members come from a range of sectors and span all levels of experience. Who knows? Her next team member could even be you. “People shouldn’t write themselves off because they think they won’t fit into an organization like UKAEA,” Lewtas says. “Many, many different skillsets can contribute to trying to realize fusion.”

The post Striving towards a fusion future appeared first on Physics World.

Work travel doubles researchers’ carbon footprint

  • This story is part of Covering Climate Now, a global collaboration of more than 250 news outlets to strengthen coverage of the climate story. 

Each professor at the University of Montreal in Canada travels on average more than 33,000 km a year, according to a recent study. That makes their carbon footprint nearly double that of most Canadians.

The travel – 90% by plane – is equivalent to three trans-Atlantic trips from Montreal to Paris and back per person each year. The amount of travel varies greatly between professors, however, with some scarcely leaving the university walls while others clock up some 175,000 km, the study found.

“Flying around the world comes at a high environmental price,” says Julien Arsenault of the University of Montreal. “It is very easy to be hyper-mobile, but we should really think hard about how much of this travelling is actually necessary or even beneficial to science.”

Together with co-authors at the University of Montreal and McGill University, both in Canada, Arsenault decided to assess the travel habits of his colleagues as he was involved in a cross-institutional project to reduce the environmental footprint of universities. Although the University of Montreal could supply him and his co-authors with figures for its own environmental footprint – drawn, for example, from its energy usage and food sold – there was no information available on long-distance academic travel.

To generate this information themselves, Arsenault and his co-authors sent a survey to faculty members, research staff and graduate students. The survey set out to establish not just the carbon footprint of academics and students, but also their nitrogen footprint – a metric, gaining in usage, that reflects how much nitrogen a person is responsible for releasing into the environment from crop fertilization, fossil-fuel combustion and other processes. Nitrogen has a range of harmful effects on the environment, including smog, river pollution, and the creation of nitrous oxide, a greenhouse gas that is some 300 times more potent than carbon dioxide.

Professors’ travel leaves annual per-person footprints of nearly 11 tonnes of carbon dioxide and 2 kg of nitrogen, the researchers found, while international students’ travel leaves nearly 4 tonnes of carbon dioxide and 0.5 kg of nitrogen. “We were surprised by those numbers,” says Arsenault.

A 2015 study by a team at the Lanzhou Library of the Chinese Academy of Sciences found that in 2007 the average Canadian emitted about 13 tonnes of carbon dioxide from their household a year. Assuming academics generate similar household emissions, says Arsenault, their professional travel nearly doubles their overall carbon footprint, from 13 to 24 tonnes.
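The arithmetic behind those statements can be sketched directly from the figures quoted above. This is a rough check only; the 5,500 km one-way Montreal–Paris distance is an assumed approximation.

```python
# Back-of-envelope check of the travel-footprint figures quoted in the study.
household_t = 13.0   # average Canadian household CO2 emissions (tonnes/year)
travel_t = 11.0      # professors' travel footprint from the survey (tonnes/year)

total_t = household_t + travel_t
print(f"Combined footprint = {total_t:.0f} t CO2/year")     # 24 t

# The quoted 33,000 km/year is roughly three Montreal-Paris round trips,
# assuming a one-way distance of about 5,500 km.
round_trip_km = 2 * 5_500
print(f"{33_000 / round_trip_km:.1f} round trips")          # 3.0
```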

Arsenault expects the academic footprint to be lower in Europe, where academic institutions are grouped closer together, and larger in Australia, where students and professors seeking to network and attend conferences must travel farther. He also believes the footprint is likely to be much lower in developing countries, where there is less funding available for travel.

“We believe travelling is necessary in many cases: for field research, for young researchers who need to secure employment through networking, or for researchers from developing countries who may benefit from presenting their work in international conferences,” Arsenault explains. “However, established academics from developed countries have, in our opinion, a responsibility to reduce their travel or to minimize its environmental impact.”

Earlier this year, researchers at the University of British Columbia found no link between air travel emissions and academic productivity, although they did find a link to salary.

Arsenault recommends that the “internationalization” of someone’s research activities should not be rated so highly when recruiting new staff. He also suggests that universities could require their researchers to buy carbon offsets for air travel, and for shorter distances only provide expenses for travel by rail or bus. But, he adds, “the decision to travel is also an individual one. Researchers need to rethink their travel habits.”

The post Work travel doubles researchers’ carbon footprint appeared first on Physics World.

Model independence


It’s been an exciting few months for particle physicists. In May more than 600 researchers gathered in Granada, Spain, to discuss the European Particle Physics Strategy, while in June CERN held a meeting in Brussels, Belgium, to debate plans for the Future Circular Collider (FCC). This giant machine – 100 km in circumference and earmarked for the Geneva lab – is just one of several different projects (including those in astroparticle physics and machine learning) that particle physicists are working on to explore the frontiers of high-energy physics.

CERN’s Large Hadron Collider (LHC) has been collecting data from vast numbers of proton–proton collisions since 2010 – first at an energy of 8 TeV and then 13 TeV during its second run. These have enabled scientists on the ATLAS and CMS experiments at the LHC to discover the Higgs boson in 2012, while light has also been shed on other vital aspects of the Standard Model of particle physics.

But, like anything else, colliders have a lifespan and it is already time to plan the next generation. With the scientific output of LHC Run 3 expected to peak in 2023, a major upgrade has already begun during the machine’s current shutdown period. The High-Luminosity LHC (HL-LHC), which will run from the mid-2020s to the mid-2030s, will allow high-precision collisions with a centre-of-mass energy of 14 TeV and gather datasets that are 10 times larger than those of the current LHC.

Particle physicists hope the upgraded machine will increase our understanding of key fundamental phenomena – from strong interactions to electroweak processes, and from flavour physics to top-quark physics. But even the HL-LHC will only take us so far, which is why particle physicists have been feverishly hatching plans for a new generation of colliders to take us to the end of the 21st century.

Japan has plans for the International Linear Collider (ILC), which would begin by smashing electrons and positrons at 250 GeV, but could ultimately achieve collisions at energies of up to 1 TeV. China has a blueprint for a Circular Electron Positron Collider that will reach energies of up to 250 GeV – with the possibility of converting the machine at some later point into a second-generation proton–proton collider. At CERN, two options are currently on the table: the Compact Linear Collider (CLIC) and the aforementioned FCC (see box).

CERN’s Future Circular Collider

Future Circular Collider (FCC)
(Courtesy: CERN)

Along with the Compact Linear Collider (CLIC), CERN has another option for the next big machine in particle physics. The Future Circular Collider (FCC) would require a massive new 100 km tunnel to be excavated below France and Switzerland – almost four times longer than the current LHC. The FCC would run in two phases. The first would be dedicated to electron–positron collisions (FCC-ee) in the newly built 100 km tunnel, starting in around 2040. The second phase, running from around 2055 to 2080, would involve dismantling FCC-ee and reusing the same tunnel to carry out proton–proton collisions (FCC-hh) at energies of up to 100 TeV. The European Particle Physics Strategy update is due to conclude and publish priorities for the field, which will eventually inform the CERN Council’s decision on whether to move forward with the technical design report.

Setting targets

Now, if you’re not a particle physicist, you might be wondering why we want to upgrade the LHC, let alone build even more powerful colliders. What exactly do we hope to achieve from a scientific point of view? And, more importantly, how can we most effectively achieve those intended goals? Having attended the Brussels FCC conference in June, I was struck by one thing. While recent news stories about next-generation colliders have concentrated on finding dark matter, the real reason for these machines lies much closer to home. Quite simply, there is lots we still don’t know about the Standard Model.

A key task for any new collider will be to improve our understanding of Higgs physics (see box below for just one example), allow very precise measurements of a number of electroweak observables, improve sensitivity to rare phenomena, and expand the discovery reach for heavier particles. Electron–positron colliders could, for example, more precisely measure the relevant interactions of the Higgs boson (including interactions not yet tested). Future proton–proton colliders, meanwhile, could serve as a “Higgs factory” – with the Higgs boson becoming an “exploration tool” – to study, among other things, how the Higgs interacts with itself and perform high-precision measurements of rare decays.

Understanding the Higgs mass

Pencil
(Courtesy: iStock/malerapaso)

The Higgs boson was discovered at CERN in 2012, but we’re still not sure why it has such a low measured mass of just 125 GeV. Known as the “naturalness problem”, this puzzle is linked to the fact that the Higgs boson is the manifestation of the Higgs field, with which virtual particles in the quantum vacuum interact. As a result of all these interactions, the squared mass of the Higgs boson receives additional contributions. But for the Higgs mass to be as low as 125 GeV, the contributions from different virtual particles at different scales have to cancel out precisely. As CERN theoretical physicist Gian Giudice once put it, this “purely fortuitous cancellation at the level of 10³², although not logically excluded, appears to us disturbingly contrived” (arXiv:0801.2562v2). He likens the situation to balancing a pencil on its tip – perfectly possible in principle, but in practice highly unlikely as you have to fine-tune its centre of mass so that it falls precisely within the surface of its tip. Indeed, Giudice says that the precise cancellation required for the measured mass of the Higgs boson to be 125 GeV is like balancing a pencil as long as the solar system on a tip a millimetre wide.
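To see roughly where a number like 10³² comes from, here is a back-of-envelope estimate (an illustration, not Giudice’s calculation): assume the quantum corrections to the squared Higgs mass are cut off at the Planck scale with a one-loop suppression factor of 1/(16π²), and compare them with the observed value.

```python
# Rough estimate of the Higgs fine-tuning, for illustration only.
# Assumes corrections cut off at the Planck scale with a one-loop factor
# of 1/(16*pi^2); different assumptions shift the exponent by a few.
from math import pi, log10

m_higgs = 125.0        # observed Higgs mass, GeV
m_planck = 1.22e19     # Planck scale, GeV

correction = m_planck**2 / (16 * pi**2)   # typical size of quantum corrections (GeV^2)
fine_tuning = correction / m_higgs**2     # how precisely contributions must cancel

print(f"Cancellation at roughly 1 part in 10^{log10(fine_tuning):.0f}")   # ~10^32
```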

A further task would be to shed light on neutrinos. Physicists are keen to understand the mechanism generating the masses of the three species of neutrino (electron, muon and tau) that are linked to the physics of the early universe. And then, of course, there’s the physics of the dark sector, which includes the search for a variety of possible dark-matter candidates. Any eventual finding for these in new colliders would have to be combined and cross-checked with data coming from direct dark-matter detection searches and a variety of cosmological data.

More generally, there’s the physics that goes Beyond the Standard Model (BSM). BSM physics includes (but is not limited to) familiar supersymmetric (SUSY) models, in which every boson (a particle with integer spin) has a fermion “superpartner” (with half-integer spin), while each fermion has a boson superpartner. The BSM landscape extends well beyond SUSY, and features a number of possible exotic options, ranging from possible new resonances at high energy to extremely weakly coupled states at low masses.

An interesting methodological approach

But one thing is certain in this bewildering array of unanswered questions: how we approach these challenges matters as much as what kind of machine we should build. And that’s why it’s crucial for particle physicists – and for philosophers of science such as myself – to discuss scientific methodology. Theorists have tried to find a successor to the Standard Model, but it’s still the best game in town. New methodological approaches are therefore vital if we want to make progress.


I can understand why most current research is still firmly grounded in the Standard Model, despite the many theoretical options currently explored for BSM physics. Why jump ship if the vessel’s still going strong, even though we don’t fully understand how it functions? The trick will be to learn how to navigate the vessel in the uncharted, higher-energy waters, where no-one knows if – or where – new physics might be. And this trick has a name: “model independence”.

Model independence, which nowadays is routinely used in particle physics and cosmology, has been prompted by two main changes. The first is the immense wealth of data emerging from particle colliders like the LHC or cosmology projects such as the Dark Energy Survey, as just one example. This new era of “big data” is forcing scientists to do fundamental research in a way that is no longer railroaded along pre-defined paths but is more open-ended, more exploratory and more sensitive to data-driven methods and phenomenological approaches.

The other main factor behind the growth of model-independent approaches has been the proliferation of theoretical models designed to capture possible new BSM physics. Similarly, its increasing popularity in cosmology has been driven by the many different modified-gravity models proposed to tackle open questions about the standard ΛCDM model, which postulates the existence of cold dark matter (CDM) and dark energy (Λ).

But what exactly is model independence? Surely scientific inquiry depends on models all the way down – so how can scientific inquiry ever be independent of a model? Well, first let’s be clear what we mean by models. To understand how, say, a pendulum swings, you model it using the principles of Newtonian physics. But you also have to devise what philosophers of science call “representational models” associated with the theory. In the case of a pendulum, these representational models are built using Newtonian physics to represent specific phenomena, such as the displacement of the pendulum from equilibrium.

The reason why we build such models is to see if a real system matches the representational model. In the case of a pendulum, we do this by collecting data about how it swings (model of the data) and checking for any systematic error in the way it functions (model of the experiment). But if models are so ubiquitous even for something as simple as a pendulum, how can there be any “independence from models” when it comes to particle physics or cosmology? And why does this question even matter?
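As a toy illustration of that modelling chain (entirely schematic, with invented noise standing in for real measurements), the snippet below builds the representational model of a small-angle pendulum, generates a mock “model of the data”, and checks how well the two agree:

```python
# Toy illustration of the modelling chain described above: a representational
# model of a pendulum (small-angle Newtonian solution), a "model of the data"
# (noisy mock measurements), and a simple check of how well they match.
import math, random

random.seed(0)

g, length = 9.81, 1.0                 # assumed pendulum parameters
omega = math.sqrt(g / length)         # angular frequency from Newtonian physics
theta0 = 0.1                          # initial displacement (rad)

def model(t):
    """Representational model: small-angle displacement at time t."""
    return theta0 * math.cos(omega * t)

# "Model of the data": pretend measurements, with random noise standing in
# for experimental scatter.
times = [0.1 * i for i in range(50)]
data = [model(t) + random.gauss(0, 0.005) for t in times]

# Crude goodness-of-fit check between model and data.
rms = math.sqrt(sum((d - model(t))**2 for t, d in zip(times, data)) / len(times))
print(f"RMS mismatch = {rms:.4f} rad")
```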

Model independence matters in fields where research is more open-ended and exploratory in nature. It is designed to “bracket off” – basically ignore – certain well-entrenched theoretical assumptions of the Standard Model of particle physics or the ΛCDM model in cosmology. Model independence makes it easier to navigate your way through uncharted territories – higher energies in particle physics, or modified gravity in the case of cosmology.

To see what this “bracketing-off” means in practice, let’s look at how it’s been used to search for a particular example of BSM physics at the LHC through the phenomenological version of the Minimal SuperSymmetric Model (pMSSM). Like any SUSY model, this model assumes that each quark has a “squark” superpartner and each lepton has a “slepton” superpartner – particles that are entirely hypothetical as of today. Involving only a handful of theoretical parameters – 11 or 19 depending on which version you use – the pMSSM gives us a series of “model points” that are effectively snapshots of physically conceivable SUSY particles, with an indication of their hypothetical energies and decay modes. These model points bracket off many details of fully fledged SUSY theoretical models. Model independence manifests itself in the form of fewer parameters (masses, decay modes, branching ratios) that are selectively chosen to make it easier for experimentalists to look for relevant signatures at the LHC and exclude, with a high confidence level, a large class of these hypothetical scenarios.

In 2015, for example, members of the ATLAS collaboration at CERN summarized their experiment’s sensitivity to supersymmetry after Run 1 at the LHC in the Journal of High Energy Physics (10 134). Within the boundaries of a series of broad theoretical constraints for the pMSSM-19, infinitely many model points are physically conceivable. Out of this vast pool of candidates, as many as 500 million of them were originally randomly selected by the ATLAS collaboration. Trying to find experimental evidence at ATLAS for any of these hypothetical particles under any of these physically conceivable model points is like looking for a needle in a haystack. So how do particle physicists tackle the challenge?

What members of the ATLAS collaboration did was to gradually trim down the sample, step by step reducing the 500 million model points to just over 310,000 that satisfied a set of broad theoretical and experimental constraints. By sampling enough model points, the researchers hoped that some of the main features of the full pMSSM might be captured. The final outcome of this sampling takes the form of conceivable SUSY sparticle spectra that are then checked against ATLAS Run 1 searches. And as more data were brought in at LHC Run 2, more and more of these conceivable candidate sparticles were excluded, leaving only live contenders (which nonetheless remain purely hypothetical as of today).
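The logic of such a scan can be sketched in a few lines. The example below is purely schematic: the parameter names, ranges and “constraints” are invented stand-ins, not the 19 pMSSM parameters or the actual ATLAS selection criteria.

```python
# Schematic illustration of a model-independent parameter scan, in the spirit
# of the pMSSM study described above. Parameters and cuts are placeholders.
import random

random.seed(1)

def random_model_point():
    """Draw one candidate model point from broad, flat parameter ranges."""
    return {
        "gluino_mass_GeV": random.uniform(200, 4000),
        "neutralino_mass_GeV": random.uniform(0, 2000),
        "squark_mass_GeV": random.uniform(200, 4000),
    }

def passes_constraints(p):
    """Keep only points satisfying some broad (made-up) requirements,
    standing in for theoretical consistency and existing experimental limits."""
    return (p["neutralino_mass_GeV"] < p["gluino_mass_GeV"] and
            p["squark_mass_GeV"] > 500)

# Sample a large pool of points, then trim it down with the constraints.
pool = [random_model_point() for _ in range(100_000)]
survivors = [p for p in pool if passes_constraints(p)]
print(f"{len(survivors)} of {len(pool)} model points survive the cuts")
```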

In other words, instead of testing a multitude of fully fledged SUSY theoretical models one by one to see if any data emerging from LHC might support one of them, model independence recommends looking at fewer, indicative parameters in simplified models (such as pMSSM-19). These are models that have been reduced to the bare bones, so to speak, in terms of theoretical assumptions, and are therefore more amenable to being cross-checked with empirical data.

The main advantage of model independence is that if no data are found for these simplified models, an entire class of fully fledged and more complete SUSY theoretical models can be discarded at a stroke. It is like searching for a needle in a haystack without having to turn and twist every single straw, but instead being able to discard big chunks of hay at a time. This is, of course, only one example. Model independence manifests itself more profoundly and pervasively in many other aspects of contemporary research in particle physics: from the widespread use of effective field theories to the increasing reliance on data-driven machine-learning techniques, just to mention two other examples.

Cosmological concerns

Model independence has led to a controversy surrounding cosmological measurements of the Hubble constant, which tracks the expansion rate of the universe. The story began in 2013 when researchers released the first data from the European Space Agency’s Planck mission, which had been measuring anisotropies in the cosmic microwave background since 2009. When these data were combined with the ΛCDM model of the early universe, cosmologists found a relatively low value for the Hubble constant, which was confirmed by further Planck data released in 2018 to be just 67.4 ± 0.5 km/s/Mpc.

Problems arose when estimates for the Hubble constant were made using data from pulsating Cepheid variable stars and type Ia supernovae (exploding stars), which offer more model-independent probes of the Hubble constant. These model-independent measurements led to a revised value of the Hubble constant of 73.24 ± 1.74 km/s/Mpc (arXiv:1607.05617). Additional research has only further increased the “tension” between the value of the Hubble constant from Planck’s model-dependent early-universe measurements and that from more model-independent late-universe probes. In particular, members of the H0LiCOW (H0 Lenses in COSMOGRAIL’s Wellspring) collaboration – using a further set of model-independent measurements, based on light from distant quasars that is gravitationally lensed by foreground galaxies – have recently measured the Hubble constant at 73.3 ± 1.7 km/s/Mpc.

And to complicate matters still further, in July Wendy Freedman from the University of Chicago and collaborators used measurements of luminous red giant stars to give another new value of the Hubble constant at 69.8 ± 1.9 km/s/Mpc, which is roughly halfway between Planck and the H0LiCOW values (arXiv:1907.05922). More data on these stars from the upcoming James Webb Space Telescope, which is due to launch in 2021, should shed light on this controversy over the Hubble constant, as will additional gravitational lensing data.
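Using only the values quoted above, the size of the disagreement can be estimated in the usual rough way: divide the difference between two measurements by their uncertainties added in quadrature (ignoring any correlations between the datasets).

```python
# Rough "tension" between the Hubble-constant values quoted above, in units
# of the combined standard error (correlations between datasets are ignored).
from math import sqrt

measurements = {                      # value, uncertainty (km/s/Mpc)
    "Planck (early universe)": (67.4, 0.5),
    "Cepheids + SNe Ia":       (73.24, 1.74),
    "H0LiCOW lensed quasars":  (73.3, 1.7),
    "Freedman red giants":     (69.8, 1.9),
}

def tension(a, b):
    (v1, e1), (v2, e2) = measurements[a], measurements[b]
    return abs(v1 - v2) / sqrt(e1**2 + e2**2)

print(f"Planck vs Cepheids+SNe Ia: {tension('Planck (early universe)', 'Cepheids + SNe Ia'):.1f} sigma")
print(f"Planck vs H0LiCOW:         {tension('Planck (early universe)', 'H0LiCOW lensed quasars'):.1f} sigma")
print(f"Planck vs red giants:      {tension('Planck (early universe)', 'Freedman red giants'):.1f} sigma")
```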

Let’s get philosophical

Model independence is an example of what philosophers like myself call “perspectival modelling”, which – metaphorically speaking – involves modelling hypothetical entities from different perspectives. It means looking at the range of allowed values for key parameters and devising exploratory methods, for example, in the form of simplified models (such as pMSSM-19) that scan the space of possibilities for what these hypothetical entities could be. It is an exercise in conceiving the very many ways in which something might exist with an eye to discovering whether any of these conceivable scenarios is in fact objectively possible. Ultimately, the answer lies with experimental data. If no data are found, large swathes of this space of possibilities can be ruled out in one go following a more data-driven, model-independent approach.

As a philosopher of science, I find model independence fascinating. First, it makes clear that philosophers of science must respond to – and be informed by – the specific challenges that scientists face. Second, model independence reminds us that scientific methodology is an integral part of how to tackle the challenges and unknowns lying ahead, and advance scientific knowledge.

Model independence is becoming an important tool for both experimentalists and theoreticians as they plan future colliders. The Conceptual Design Report for the FCC, for example, mentions how model independence can help “to complete the picture of the Higgs boson properties”, including high-precision measurements of rare Higgs decays. Such model-independent searches are a promising (albeit obviously not exclusive or privileged) methodological tool for the future of particle physics and cosmology. Wisely done, the scientific exercise of physically conceiving particular scenarios becomes an effective strategy to find out what there might be in nature.

The post Model independence appeared first on Physics World.

Greenland’s ecosystems change abruptly with climate

  • This story is part of Covering Climate Now, a global collaboration of more than 250 news outlets to strengthen coverage of the climate story. 

An international team has discovered a host of signals in West Greenland – including ice-out dates and changes in plant growth – that indicate the region’s environment responds abruptly to changes in climate.

The observations appear to undermine climate adaptation strategies, which rely on there being a time lag between changes in the climate and the resulting changes in the environment.

“We are seeing fast shifts in ecosystem responses,” says Jasmine Saros at the University of Maine, US. “This makes it an even greater challenge to know how to anticipate and avoid [them].”

The Arctic is the most rapidly warming region of the planet and has a big influence on many climatic and environmental processes elsewhere. Over the past 150 years, the Arctic has warmed between two and three times faster than the global average.

In Greenland between 2007 and 2012, mean annual air temperatures were 3 °C higher than in the two decades up to 2000. Meanwhile, the area of Kangerlussuaq, West Greenland, exhibited no warming for most of the 20th century then suddenly started warming after the mid-1990s.

Saros and colleagues took Kangerlussuaq as an ideal place to quantify the ecological effects of very rapid warming. They analysed monitoring data and environmental “archives” such as lake sediment cores and shrub rings for the past 40 years. In the latter half of this period there were two jumps in the data: from 1994, mean June air temperatures rose 2.2 °C while mean winter rainfall doubled; then from 2006, mean July air temperatures rose 1.1 °C.

Both these climate jumps saw concurrent or only slightly delayed environmental responses. For instance, in the 1990s the seasonal loss of lake ice shifted six days earlier, and the date by which half of all plant species had come into growth moved 10 days earlier. In the early 2000s, this initiation of plant growth shifted another 13 days earlier, while discharge from the Greenland Ice Sheet rose 50%.

A little later, lakes became clearer and warmer, which the researchers believe will have driven a rise in bottom-dwelling algae. “That shift is important because it can change the nutrient and carbon cycle in lakes, and ultimately across the landscape,” says Saros.

“We were surprised because previous research typically revealed that ecosystem responses to rapid climate change are often delayed or dampened by dynamics and interactions within ecosystems,” says Saros. “In this case, however, we found that Arctic systems responded simultaneously with, or shortly after, these climate shifts.”

Saros believes that shorter growing seasons, simpler ecosystems and lower biodiversity could all contribute to the sensitivity of Arctic ecosystems to rapid climate change.

“Our results have implications for sea level rise, ocean salinity, and carbon cycling – all environmental changes with far reaching consequences,” she says.

The team reported the findings in Environmental Research Letters (ERL).

The post Greenland’s ecosystems change abruptly with climate appeared first on Physics World.

Dealing with a climate emergency


The Intergovernmental Panel on Climate Change published a report last year that starkly laid out what was needed to limit global warming to 1.5 °C. It pointed to the overwhelming evidence that irreversible climate change was already occurring and that many of the changes were happening faster than previously thought. At the report’s heart, however, was a message of hope and optimism – we can regain some control and avert disaster if we act quickly.

While we all need to act individually, we also feel that physics as a discipline needs to come together to avoid the impending climate disaster. We therefore call on the community – students, scientists, industrialists, publishers and funders – to declare a climate emergency and commit to placing emissions reductions at the heart of everything we do. For inspiration we should look to the Pugwash movement, which sought a world free from weapons of mass destruction, and to the founding ideals of organizations like CERN, which harnessed the collaborative, evidence-based approach of physics to deliver peace and prosperity.

Accelerating away Top: atmospheric carbon-dioxide (CO2) concentrations in parts per million (ppm) recorded by the US National Oceanic and Atmospheric Administration (NOAA) up to 2010 (blue) and beyond (red), alongside concentrations calculated in 2010 by a model (green) that assumes the world continues in a “business-as-usual” (BAU) way. Bottom: the difference between the NOAA data and the BAU model reveals that CO2 levels are not only rising – but doing so even faster than the 2010 BAU model predicted.

Many physicists are already working on the science and technology of emissions reduction – and this effort will continue to grow. We are instead concerned with community-wide action that changes the way we work and demonstrates to the wider public that global, collaborative activities like science may be sustainably carried out. It is essential that physics plays its part in a wider and growing call to action from across the scientific community.

In late August, the climate activist Greta Thunberg crossed the Atlantic in a zero-emissions sailboat to speak at the UN Climate Action Summit on 23 September. She travelled that way to draw attention to the environmental cost of air travel, which many of us ignore. We’re all familiar with the senior scientist who jets in to give a conference talk before leaving for another event that evening or the next day. Given the carbon cost, it is hard to argue that this model of nomadic superstars who spend their summer in airports is justified, especially in an era where live-streamed TED talks can be watched by millions. In fact, a recent study by the University of British Columbia in Canada suggests that, beyond a low minimum level, more travel does nothing to improve scientific productivity. To put things into perspective, a recent investigation by the leading Swiss research institution ETH Zurich found that flights accounted for a staggering 50% of its emissions. Clearly urgent action is needed.

Physicists – who led the world in developing better ways to collaborate, from the telegraph to the World Wide Web – should show leadership when it comes to cutting their travel. Making more use of online technology at physics conferences would also have wider benefits, such as allowing people who have to care for family members – or who find it hard to travel – to take part remotely. It would also help physicists from countries with less funding for science. A grassroots campaign to cut the amount of academic travel has been running since 2015.

We have already begun to ask organizers about giving our talks remotely, stimulating high-level discussions with the American Physical Society and at leading US universities. As a result, one of us will trial a “virtual visit” to Harvard University and the Massachusetts Institute of Technology this winter, which will include remote presentations and discussions. Indeed, at a recent meeting we hosted at Durham University in the UK, the stand-out talk was delivered remotely from a national laboratory in the US, demonstrating the potential for high-quality scientific collaboration that does not compromise the quality of the meeting.

As well as action at an individual level, we must also seek policy changes from funding bodies, learned societies and hiring committees. For example, rules set by UK Research and Innovation – the umbrella organization of the seven UK research councils – currently favour the cheapest (rather than the most carbon-efficient) means of travel and expressly forbid the use of funds to pay for emissions offsetting. In 2004 Kevin Anderson, a climate scientist from the University of Manchester, UK, proposed the idea of a “carbon credit card” to properly account for carbon emissions. Such ideas could enable funding agencies to cut the number of international conferences we attend, reducing our dependency on air travel.

We also recommend that sustainability should become an explicit criterion when funding bodies assess grant applications, on a similar level to ethical considerations and impact. So any attempts to assess the academic and societal impact of our research and teaching – such as the UK’s Research Excellence Framework – should include an assessment of its climate impact too.

Some might argue that any change we make is a drop in the ocean. The same, of course, is true of most of our individual contributions to scientific progress. Yet physics shapes all our futures. Let us use this incredible privilege to act on climate change and hand our children a world where they can still follow their physics dream.

The post Dealing with a climate emergency appeared first on Physics World.

Laser-cooled ions implant deterministic colour centres

A new technique for reliably inserting single-ion impurities into a crystal with a precision of just tens of nanometres could help in the development of quantum devices such as quantum computers or quantum simulators. The approach uses a source of laser-cooled praseodymium ions to fabricate arrays of praseodymium colour centres in synthetic crystals of yttrium aluminium garnet (YAG). Around 50% of the implanted impurities fluoresce – a success rate comparable to that of techniques requiring ion energies three orders of magnitude higher.

Solid-state materials containing impurities such as nitrogen-vacancy colour centres or single rare-earth ions are a promising way to make scalable quantum information processors. The quantum states of these impurities can be tailored by laser and microwave pulses to perform quantum logic operations and the states read out by measuring their fluorescence. Precisely introducing ordered arrays of such impurities into crystals for scaling up quantum processors has proved difficult, however.

Deterministic single ion implantation with high placement precision

“Our technique allows for deterministic single ion implantation in a solid-state material with high placement precision,” explains Karin Groot-Berning of the Johannes Gutenberg University Mainz, who led this research effort. “The added advantage is that it can be applied to a large range of materials, doping ions and implantation energies. We believe it paves the way to the scalable fabrication of qubit arrays, such as those made of phosphorus qubits in ultra-pure silicon, for example. Being able to precisely place these arrays of single atoms in solids is an important step towards making quantum devices in which the arrays are the quantum register.”

Paul trap

Groot-Berning and colleagues began by loading and trapping a single praseodymium (Pr) ion and a single calcium (Ca) ion in a Paul trap. The Ca ion is laser-cooled so that the wave packet of the sympathetically cooled Pr ion also becomes very small (well below 100 nm in size), she explains.

The researchers extract both ions by applying an electric field. They “blank” away the Ca ion, but steer the Pr ion into a lens and focus it down to a spot size of about 30 nm.

“The Pr ion then hits the surface of a YAG crystal, which is a synthetic crystal commonly employed as a lasing medium, with a speed of 73 km/s and it enters the material to a depth of about 6 nm,” says Groot-Berning. “We repeat this procedure to inject a succession of Pr ions into the crystal.”
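From the speed quoted above, a quick back-of-envelope calculation (not one reported by the team) gives the implantation energy of a praseodymium-141 ion:

```python
# Back-of-envelope kinetic energy of a Pr-141 ion travelling at 73 km/s.
AMU = 1.6605e-27      # atomic mass unit, kg
EV = 1.602e-19        # electronvolt, J
m_pr = 141 * AMU      # praseodymium-141 mass
v = 73e3              # m/s, speed quoted in the text

kinetic_energy_keV = 0.5 * m_pr * v**2 / EV / 1e3
print(f"{kinetic_energy_keV:.1f} keV")   # roughly 4 keV
```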

Forming a colour centre

“The crystal can be moved with a piezo-translation stage and we can implant any pattern,” she tells Physics World. “We perform the measurements in an ultrahigh-vacuum apparatus in Mainz, where we can trap, cool and extract the ions. Finally, we take the YAG out of the apparatus and send the sample to our colleagues at the Physical Institute of the University of Stuttgart. Here, they flash-heat the crystal to 1200 °C such that the Pr ions replace the yttrium ions in the crystal lattice, thus forming a colour centre.”

The researchers then use a set of lasers to excite the array so that it emits photons, which they can detect with a confocal microscope. They found that they could control the position of the Pr ions to a precision of 34 nm and that up to 50% of the colour centres fluoresced.

Precision could be improved further

This precision could be improved further, because it is currently limited by imperfect cooling and mechanical vibrations, says Groot-Berning. “Indeed, we have already started to work on this problem and improve the mechanical stability of our set up in a second-generation implanter.”

The team, reporting its work in Physical Review Letters, says that it now plans to use its technique to implant phosphorus ions into silicon to form arrays of quantum bits. “We also plan to investigate implanting bismuth ions, for coupling their nuclear spins to superconducting quantum bit devices, and cerium ions into the YAG crystal, because this rare earth ion allows for super-resolution microscopy (STED),” reveals Groot-Berning. “This will allow us to fabricate qubit devices using these ions and detect them with even better resolution – down to a few nm.

“Our current placement accuracy is already sufficient for fabricating quantum devices, however,” she stresses.

The post Laser-cooled ions implant deterministic colour centres appeared first on Physics World.

Investigations launched over Hurricane Dorian weather map row


Scientists have criticized the US government for politicizing weather forecasts from the National Weather Service (NWS) following a dispute over the potential path of Hurricane Dorian. The acting chief scientist of the National Oceanic and Atmospheric Administration (NOAA), of which the NWS is part, is investigating whether the agency violated its policies and ethics over the issue. Meanwhile, the Democratic-led House of Representatives’ committee on science, space and technology has announced its own investigation into the matter.

Dorian became a category five hurricane on 1 September just before making landfall on the Bahamas. Late in August, some of the charts created by the NWS indicated a small chance that a part of Alabama would experience high winds from the hurricane. On 1 September, President Trump tweeted that the state “would most likely be hit (much) harder than anticipated”. By the time of Trump’s tweet, however, the hurricane had swung north with NWS charts showing no impact on Alabama.


Responding to panicked calls from state residents, the branch of the NWS in Birmingham, Alabama, quickly tweeted that the state “will NOT see any impacts” from the hurricane. But the president refused to admit that he had been wrong. As proof, he revealed an NWS projection of Dorian’s cone of uncertainty together with an extra semicircle that was apparently drawn by a Sharpie pen. The added area covered the southeastern segment of Alabama, with Trump admitting that he did not know who added the semicircle.

According to the Washington Post, NOAA staff were instructed to “stick with official National Hurricane Center forecasts” in response to questions about the issue and not to “provide any opinion” on the President’s tweets. But an unsigned press release from NOAA, dated 6 September, stated that the agency had informed Trump “that tropical-storm-force winds from Hurricane Dorian could impact Alabama”. The release also excoriated the Birmingham NWS because, it stated, its tweet denying Trump’s information “spoke in absolute terms that were inconsistent with probabilities from the best forecast products available at the time”.

On 10 September, the New York Times reported that this press release was issued because commerce secretary Wilbur Ross had threatened to sack top NOAA personnel for allowing the NWS to contradict the president. Ross’s staff, however, have denied the threat. Meanwhile, the NOAA’s acting administrator Neil Jacobs told a meeting of weather forecasters in Alabama that no-one’s job was under threat with NWS director Louis Uccellini praising his Birmingham office for upholding “the integrity of the forecasting process”. Democratic Congressman Bill Foster of Illinois, the only physicist in Congress, tweeted that if the New York Times report is true then Ross “must resign” – a call that has also been made by at least one other Congressperson.

Safety first

Scientists have come out in support of the NWS. Alan Leshner, interim chief executive of the American Association for the Advancement of Science, says that the NWS “should be celebrated for communicating accurate information so important to the public [and] not asked to change a weather forecast in reaction to any political pressure”. The American Meteorological Society notes in a statement that it “fully supports” the NOAA “who consistently put the safety of the American public first and foremost”.

Meanwhile, NOAA’s acting chief scientist, Craig McLean, is investigating whether the agency’s unsigned statement violated the agency’s policies and ethics. In an e-mail message to staff, he called the NOAA’s response to the issue a “danger to public health and safety”.

The post Investigations launched over Hurricane Dorian weather map row appeared first on Physics World.

Graphene boosts microscope resolution by a factor of 10


Sub-nanometre resolution in 3D position measurements of light-emitting molecules has been achieved by physicists in Germany. Jörg Enderlein and colleagues at the University of Göttingen achieved the result by replacing metal films used in previous super-resolution techniques with single layers of graphene. Their innovation could allow researchers in a wide variety of fields to measure molecular positions to unprecedented degrees of accuracy.

Recently, the technique of single-molecule localization super-resolution microscopy (SMLM) has become an incredibly useful tool for researchers in fields ranging from fundamental physics to medical research. By analysing images of single light-emitting molecules, researchers can pinpoint the positions of their centres to within single atomic widths. However, SMLM faces one significant shortcoming: it can only locate molecules in 2D, giving no information about their positions along the out-of-plane axis.

This problem can be partially overcome with the technique of metal-induced energy transfer (MIET), which introduces a thin metal film into the setup. The idea is that the apparatus picks up changes in the molecule’s fluorescence caused by the molecule coupling to surface plasmons – collective electron excitations – in the film. Since this emission varies with the molecule’s distance from the film, researchers can use MIET to calculate that distance and so locate the molecule along the third axis. Yet with current versions of the technique, the accuracy of this out-of-plane measurement is 3–5 times worse than that of lateral localization in the plane of the film.

Atomic-scale resolution

Enderlein’s team aimed to improve this accuracy by replacing the metal film with graphene – a film of carbon just one atom thick. This setup also produces distance-dependent fluorescence, through coupling between the graphene and the emitter. This time, however, the spatial resolution in the out-of-plane direction is 10 times better than with conventional MIET. For the first time, this setup, called graphene-MIET (gMIET), allowed molecular positions to be measured to a resolution of one angstrom – 10⁻¹⁰ m, or 0.1 nm – roughly the “size” of an atom in a solid or molecule.

The researchers demonstrated this super-resolution by measuring the thickness of single lipid bilayers – the pairs of opposing 2D films of tadpole-shaped molecules that form cell membranes. By localizing fluorescent dyes attached to the heads of the molecules in each layer, relative to a graphene film, Enderlein and colleagues estimated a membrane thickness of around 5 nm, remarkably consistent with the known value. With further improvements, the team believes that gMIET could be used to resolve distances between individual molecules, more complex groups of molecules and small cellular structures with sub-nanometre accuracy.
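The thickness measurement itself then amounts to subtracting two axial positions and propagating their uncertainties; the sketch below uses invented numbers of a plausible magnitude, not data from the paper.

```python
# Illustrative only: lipid-bilayer thickness from two axial (gMIET) height
# estimates. The numbers are invented placeholders, not measured data.
from math import sqrt

z_top, sigma_top = 12.1, 0.1        # nm: height of dyes on the upper layer
z_bottom, sigma_bottom = 7.0, 0.1   # nm: height of dyes on the lower layer

thickness = z_top - z_bottom
uncertainty = sqrt(sigma_top**2 + sigma_bottom**2)   # errors added in quadrature
print(f"membrane thickness = {thickness:.1f} +/- {uncertainty:.1f} nm")
```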

The technique is described in Nature Photonics.

The post Graphene boosts microscope resolution by a factor of 10 appeared first on Physics World.

European Society of Radiology seeks to demystify biomarkers


The European Society of Radiology (ESR) has issued a detailed set of recommendations designed to promote the understanding and use of validated imaging biomarkers as decision-making tools in clinical trials and routine practice.

The 16-page document produced by the European Imaging Biomarkers Alliance (EIBALL), and endorsed by the ESR’s Executive Council, was published on 29 August in Insights into Imaging. EIBALL is a subcommittee of the ESR’s Research Committee, and its mission is to facilitate imaging biomarker development and standardization as well as promote their use in clinical trials and in clinical practice by collaboration with specialist societies, international standards agencies and trials organizations (Insights into Imaging 10.1186/s13244-019-0764-0).

“Both radiologists and clinicians are wary about using biomarkers,” lead author Nandita deSouza told AuntMinnieEurope.com. “They are often acquired with very different imaging protocols, which make the quantitation across sites and equipment variable. Understanding this variability and the evidence for appropriate biomarker use would greatly help those who wish to incorporate these quantitative techniques into research or clinical use to make decisions when faced with individual patients.”

Multimodality imaging of the skeleton shows secondary deposits in bone. Diffusion-weighted MRI (far right image) is a quantitative technique from which a biomarker called the apparent diffusion coefficient can be derived, either from specifically segmented regions or from the whole skeleton. (Courtesy: Nandita deSouza)

Quantitation is going to increase as artificial intelligence (AI) comes on line, and making sure it is robust and meaningful is going to be hugely important, added deSouza, who is a professor in translational imaging and co-director of the Cancer Research UK Clinical Magnetic Resonance Research Group at the Institute of Cancer Research.

EIBALL is developing a web-based biomarkers inventory that will be available to anyone on the ESR website. EIBALL works with and seeks endorsement by specialist societies such as the European Society of Gastrointestinal and Abdominal Radiology (ESGAR), the European Society of Gynaecological Oncology (ESGO) and the European Society for Breast Imaging (EUSOBI) for creating this inventory.

Also, EIBALL is working closely with its North American counterpart, the Quantitative Imaging Biomarkers Alliance (QIBA). The two groups both work towards setting benchmarks for imaging biomarker quantitation. They meet regularly to contribute to each other’s work and ensure their goals are aligned.

“The ESR strongly supports this process, and gives EIBALL a platform for presenting developments at ECR every year,” she pointed out.

Need for harmonization

In an era of machine learning and AI, it is vital to extract quantitative biomarkers from medical images that inform on disease detection, characterization, monitoring and assessment of response to treatment. Quantitation can provide objective decision-support tools in patient management, but the quantitative potential of imaging remains underexploited because of variability of the measurement, lack of harmonized systems for data acquisition and analysis, and crucially, a paucity of evidence on how such quantitation potentially affects clinical decision-making and patient outcome, according to the authors of the EIBALL report.

Having looked at the use of semiquantitative and quantitative biomarkers in clinical settings at various stages of the disease pathway – including diagnosis, staging and prognosis, as well as predicting and detecting treatment response – they feel strongly that measurement variability needs to be understood and systems for data acquisition and analysis harmonized before using quantitative imaging measurements to drive clinical decisions.

Semiquantitative readouts of scores based on an observer-recognition process are useful here. For example, MRI scoring systems for grading hypoxic-ischemic injury in neonates using a combination of T1-weighted imaging, T2-weighted imaging and diffusion-weighted imaging have shown that higher postnatal grades were associated with poorer neurodevelopmental outcome, the authors noted.

In cervical spondylosis, grading of high T2-weighted signal within the spinal cord has been related variably to disease severity and outcome. In common diseases such as osteoarthritis, where follow-up scans to assess progression are vital in treatment decision-making, such scoring approaches also are useful; web-based knowledge transfer tools using the developed scoring systems indicate good agreement between readers with both radiological and clinical background specialisms in interpreting the T2-weighted MRI data.

“MRI is more versatile than ultrasound and CT,” they wrote. “It can be manipulated to derive a number of parameters based on multiple intrinsic properties of tissue (including T1- and T2-relaxation times, proton density, diffusion and water-fat fraction) and how these are altered in the presence of other macromolecules (e.g., proteins giving rising to magnetization transfer and chemical exchange transfer effects) and externally administered contrast agents (gadolinium chelates).”

Perfusion metrics have been derived with arterial spin labelling, which does not require externally administered agents. The apparent diffusion coefficient is the most widely used metric in oncology for disease detection, prognosis and response evaluation. Postprocessing methods to derive absolute quantitation are extensively debated, but the technique is robust with good reproducibility in multicentre, multivendor trials across tumour types, according to deSouza and colleagues.
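For readers unfamiliar with the metric: in the standard mono-exponential model the diffusion-weighted signal falls as S(b) = S0·exp(−b·ADC), so the apparent diffusion coefficient can be estimated from measurements at two b-values. The sketch below applies that textbook relation to invented example signals.

```python
# Estimate an apparent diffusion coefficient (ADC) from two diffusion-weighted
# measurements, assuming the standard mono-exponential model
# S(b) = S0 * exp(-b * ADC). Signal values here are invented for illustration.
from math import log

b0, s0 = 0.0, 1000.0       # b-value (s/mm^2) and signal at b = 0
b1, s1 = 800.0, 420.0      # b-value (s/mm^2) and signal at b = 800

adc = log(s0 / s1) / (b1 - b0)     # mm^2/s
print(f"ADC = {adc*1e3:.2f} x 10^-3 mm^2/s")   # about 1.08 x 10^-3 mm^2/s
```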

Hybrid imaging

Quantitation of F-18 FDG PET/CT studies is mainly performed by standardized uptake values (SUVs), although other metrics such as metabolic active tumour volume and total lesion glycolysis are being introduced in research and the clinic.

“The most frequently used metric to assess the intensity of FDG accumulation in cancer lesions is, however, still the maximum SUV,” they continued. “SUV represents the tumour tracer uptake normalized for injected activity per kilogram body weight. SUV and any of the other PET quantitative metrics are affected by technical (calibration of systems, synchronization of clocks and accurate assessment of injected F-18 FDG activity), physical (procedure, methods, and settings used for image acquisition, image reconstruction and quantitative image analysis) and physiological factors (FDG kinetics and patient biology/physiology).”
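As a concrete illustration of that definition (the formula is the standard one; the patient numbers below are invented):

```python
# Standardized uptake value (SUV) from the definition quoted above: tumour
# tracer concentration normalized to injected activity per unit body weight.
# Tissue density is taken as 1 g/mL; the numbers are examples only.
tissue_kBq_per_mL = 8.0       # activity concentration measured in the lesion
injected_MBq = 350.0          # injected F-18 FDG activity
body_weight_kg = 70.0

# Convert to consistent units: kBq injected, grams of body weight.
suv = tissue_kBq_per_mL / (injected_MBq * 1000.0 / (body_weight_kg * 1000.0))
print(f"SUV = {suv:.1f}")     # 1.6
```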

To mitigate these factors, guidelines have standardized imaging procedures and harmonized PET/CT system performance at a European level. Newer targeted PET agents are currently assessed only qualitatively, on the basis of their distribution.

Future challenges

To become clinically useful, biomarkers must be rigorously evaluated for their technical performance, reproducibility, biological and clinical validity and cost-effectiveness, the authors wrote.

Technical validation establishes whether a biomarker can be derived reliably in different institutions and on widely available platforms. Provisions must be made if specialist hardware or software is required, or if a key tracer or contrast agent is not licensed for clinical use, they stated. Reproducibility is very rarely demonstrated in practice because inclusion of a repeat baseline study is resource and time intensive. Multicentre technical validation using standardized protocols needs to be addressed after initial biological validation. Quantitative biomarkers can then be clinically validated, showing that the same relationships are observed in patients.

Increasingly, the role of imaging in the context of other non-imaging biomarkers needs to be considered as part of a multiparametric healthcare assessment. The integration of imaging biomarkers with tissue and liquid biomarkers is likely to replace many traditional and more simplistic approaches to decision-support systems.

“In an era of artificial intelligence, where radiologists are faced with an ever-increasing volume of digital data, it makes sense to increase our efforts at utilizing validated, quantified imaging biomarkers as key elements in supporting management decisions for patients,” they concluded.

  • This article was originally published on AuntMinnieEurope.com ©2019 by AuntMinnieEurope.com. Any copying, republication or redistribution of AuntMinnieEurope.com content is expressly prohibited without the prior written consent of AuntMinnieEurope.com.

The post European Society of Radiology seeks to demystify biomarkers appeared first on Physics World.

Dosimetry software captures daily, cumulative patient trends

Intensity-modulated radiotherapy (IMRT) and variants such as volumetric-modulated arc therapy (VMAT) have proven to be game-changers in cancer treatment over the past decade, delivering precise and highly conformal “dose painting” of complex tumour sites while minimizing collateral damage to healthy tissue and nearby organs at risk. While the benefits are clear, the complexities of IMRT/VMAT treatment planning and delivery are such that a redoubled focus on all aspects of quality assurance (QA) is essential – not least in terms of patient-related QA to ensure that dose delivery remains within tolerance versus the original simulation and treatment plan.

Standard Imaging, a US-based provider of QA solutions for radiation oncology, believes that its Adaptivo patient dosimetry software ticks a lot of those patient QA boxes by providing a granular view into the daily and cumulative dose delivered as patient geometry, set-up and tumours change during the course of treatment. What’s more, the software automatically imports and analyses patient data, presents the results in a summary dashboard, and sends alerts for dose deviations that require attention from the oncologist.

Problem-solving

“The problem Adaptivo addresses is the gaping hole in treatment delivery in most radiation oncology clinics,” explains Shannon Holmes, staff medical physicist at Standard Imaging. “What’s missing is that day-to-day information to understand the impact of various geometric changes in the patient’s anatomy [e.g. weight loss, tumour shrinkage] or patient positioning on the overall quality of the treatment,” she adds. “Put simply, are we hitting the target and are we doing it in the way we intended in the treatment plan?”

Shannon Holmes: “Having the data to assess patient changes gives confidence that you are delivering high-quality treatments.” (Courtesy: Standard Imaging)

To align with existing clinical workflows, Adaptivo’s functionality is organized into three core building blocks. The Pre-treatment module verifies IMRT and VMAT delivery using the treatment system’s portal imager, streamlining pre-treatment QA without the need for phantoms or additional detectors. The software communicates directly with the record and verify (R&V) system and automatically compares measured results to the predicted image (with email notification of either each pre-treatment delivery or only those that fail acceptance criteria).

The In Vivo module, meanwhile, provides daily exit-dose monitoring to identify unforeseen deviations from the treatment plan, performing portal-to-calculated and portal-to-portal comparisons of per-beam metrics, per-fraction metrics and gamma metrics. The software’s third module, Adaptive, provides daily and cumulative 3D dose analysis, automatically mapping the original planned contours to daily cone-beam CT images (or the most recent cone-beam image set). This deformable registration ensures that changes in tumour size and patient weight loss, for example, are factored into both daily and cumulative dose and dose volume histogram (DVH) tracking.
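As a rough illustration of the kind of gamma comparison mentioned above – and not a description of Adaptivo’s own algorithm – the following Python sketch computes a simplified 1D global gamma index with 3%/3 mm criteria on two made-up dose profiles.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Very simplified 1D global gamma index (3%/3 mm by default).

    For each reference point, search all evaluated points for the minimum
    combined dose-difference / distance-to-agreement metric.
    """
    ref_max = ref_dose.max()
    gammas = np.zeros_like(ref_dose)
    for i, (r_pos, r_dose) in enumerate(zip(positions, ref_dose)):
        dose_term = (eval_dose - r_dose) / (dose_tol * ref_max)
        dist_term = (positions - r_pos) / dist_tol_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Illustrative profiles: a slightly shifted, slightly scaled Gaussian beam profile
x = np.linspace(-50, 50, 201)                      # position in mm
planned = 2.0 * np.exp(-x**2 / (2 * 15.0**2))      # planned dose (Gy)
measured = 1.98 * np.exp(-(x - 1.0)**2 / (2 * 15.0**2))

gamma = gamma_1d(planned, measured, x)
print(f"Gamma pass rate (gamma <= 1): {100 * np.mean(gamma <= 1):.1f}%")
```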

Clinical upside

So what are the main operational benefits of Adaptivo for medical physicists and the wider radiation oncology team? “I would say insight first and foremost,” notes Holmes, citing the access the software provides to daily information about patient set-up and anatomy variations and the impact on dose distributions. “Having the data to assess those patient changes gives confidence that you are delivering high-quality treatments,” she explains. “I guess there’s also the issue of having the hard data to support decision-making if you need to replan a patient – or whether, despite patient weight loss for example, your treatment plans are still robust.”

Workflow efficiencies also figure prominently, with Holmes noting that Adaptivo’s automated data input and analysis allows medical physicists to focus their time on tasks that align with their abilities and training. “With Adaptivo, a physicist would be analysing results rather than just transferring files and hitting ‘calculate’,” she explains. “The software also makes chart-checks more meaningful because you’re no longer just looking at whether the number of MU [monitor units] delivered matches the number of MU that were planned. You can actually understand the impact on your patient: was the dose distribution as expected, was the patient set-up in alignment, was the daily treatment in line with what was expected.”

Radiation oncologists can quickly judge whether a replan is needed, focusing on those plans that truly require altering and expediting the approval process

Shannon Holmes

It’s the insights that Adaptivo provides – highlighting daily and cumulative dose deviations or trends – that’s the big differentiator in terms of patient QA and enhanced treatment outcomes. Consider head-and-neck cancer, a clinical indication that commonly requires treatment replanning during a course of radiation therapy. “For head-and-neck patients,” explains Holmes, “it often hurts to eat during treatment, so the patient loses weight and there’s often significant anatomical change in a location that affects the attenuation of the treatment beam.”

Within Adaptivo, those anatomical changes manifest as higher exit doses in the In Vivo module, an indicator that attenuation is decreasing. In this scenario, the 3D dose calculations in the Adaptive module will display how the daily doses on the cone-beam CT are changing as the patient loses weight, mapping that daily data back onto the planning CT to give a cumulative delivered dose distribution.

In turn, says Holmes, “the Adaptive module will actually generate a predictive, cumulative dose flag that tells you, based on how you’ve been treating so far, whether you’re going to be outside the tolerance you’ve set yourself by the end of the treatment – and if so, how far out you’re going to be.”
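The idea behind such a predictive flag can be sketched in a few lines of Python. The example below is a simplified stand-in, not Standard Imaging’s implementation: it projects the end-of-treatment dose to a structure by assuming the remaining fractions behave like those delivered so far, then flags the case if the projection strays beyond a user-set tolerance. All numbers are illustrative.

```python
def project_cumulative_dose(delivered_fraction_doses, planned_fraction_dose,
                            total_fractions, tolerance_fraction=0.05):
    """Project the end-of-treatment dose from the fractions delivered so far.

    Assumes the remaining fractions will, on average, deliver the same dose to
    the structure as the fractions delivered to date, then flags the projection
    if it deviates from the planned total by more than the chosen tolerance.
    """
    n_done = len(delivered_fraction_doses)
    mean_so_far = sum(delivered_fraction_doses) / n_done
    projected_total = sum(delivered_fraction_doses) + mean_so_far * (total_fractions - n_done)
    planned_total = planned_fraction_dose * total_fractions
    deviation = (projected_total - planned_total) / planned_total
    flagged = abs(deviation) > tolerance_fraction
    return projected_total, deviation, flagged

# Illustrative organ-at-risk mean doses (Gy) for the first 10 of 30 fractions
daily = [0.9, 0.92, 0.95, 0.97, 1.0, 1.02, 1.05, 1.08, 1.1, 1.12]
total, dev, flag = project_cumulative_dose(daily, planned_fraction_dose=0.9, total_fractions=30)
print(f"Projected total: {total:.1f} Gy ({dev:+.1%} vs plan) -> flag for replan review: {flag}")
```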

Holmes concludes: “It’s this complete view of delivered dose that gives the data and confidence needed to validate any replanning. Radiation oncologists can quickly judge whether a replan is needed, focusing on those plans that truly require altering and expediting the approval process.”

Standard Imaging released Version 1.3 of Adaptivo earlier this summer and will be showcasing the latest features of the software at the ASTRO Annual Meeting this week. Version 1.3 enhancements include compatibility with ARIA version 15 (Varian’s R&V system) and pretreatment QA functionality for 10 MV beams. Disease-specific gamma criteria can now be applied for both the Pre-treatment and In Vivo analysis modules, while there’s also a representative beam data option for In Vivo commissioning.

  • Standard Imaging will be exhibiting at booth 1435 during the ASTRO Annual Meeting in Chicago, IL, from 15-17 September.

The post Dosimetry software captures daily, cumulative patient trends appeared first on Physics World.

Physics World joins Covering Climate Now project

Physics World Environment and Energy is pleased to announce that it is participating in the Covering Climate Now media initiative in the run-up to the UN Climate Action Summit in New York on 23 September.

More than 220 media outlets, including Physics World, have committed to increase their coverage of climate change in the eight days before the meeting. The initiative officially begins today; from tomorrow we aim to bring you two pieces of climate coverage each day, more than double our normal offering. Watch out for a physicist-led call to action, news on the latest climate research, a climate takeover of our weekly podcast, the challenges of managing retreat from climate change, and more. From university academics’ carbon footprints to what climate scientists think about Extinction Rebellion and other campaigning groups, we’ve got it covered.

At the Climate Action Summit, governments will submit their plans for reaching the Paris Agreement goal of keeping global temperature rise well below 2 °C.

“The need for solid climate coverage has never been greater,” says Kyle Pope, editor and publisher of CJR, which founded the Covering Climate Now project along with The Nation. “We’re proud that so many organizations from across the US and around the world have joined with Covering Climate Now to do our duty as journalists – to report this hugely important story.”

Outlets participating in Covering Climate Now will “run as much high-quality climate coverage as they can – and thereby signal to their audiences the paramount importance of the climate story”, according to CJR.

Recent Physics World coverage of climate change includes making labs more sustainable, a special climate edition of the Physics World Stories podcast, and the impacts of flying on academic productivity. We’re looking forward to bringing you more of the same high-quality content in the week to come. For years, climate scientists, many with a physics background, have carefully observed and modelled the changes to, and outlook for, our planet. Slowing the climate change they have projected from our greenhouse gas emissions – and adapting to it – will take many minds and much innovation; we hope our audience can contribute to that story too.

The post <i>Physics World</i> joins Covering Climate Now project appeared first on Physics World.

Eel delivers record-breaking voltage, pricey helium grounds Boris Johnson blimp, did top journals ban quantum foundations?

Were papers on quantum foundations banned from the Physical Review series of journals in the latter part of the 20th century? That is a recurring claim by some in the field – made most recently by Sean Carroll in an article in The New York Times about quantum mechanics.

The mathematical physicist Peter Woit has investigated these claims, which seem to focus on policies set down by Sam Goudsmit – who was long-time editor-in-chief of the journal series until 1974.

Did Woit find any evidence for a banning policy? Not really – Woit seems to conclude that Goudsmit was annoyed with having to deal with poor quality theory papers in general.

You can read more in his blog post “Regarding papers about fundamental theories”.

Biologists have discovered a new species of electric eel that can deliver a whopping 860 V, smashing the previous record of 650 V. Called Electrophorus voltai, the fish lives in the Brazilian Shield, a highland region of the Amazon basin. The scientists reckon that the eel has evolved the capability to deliver such a high voltage because the water it lives in is a relatively poor conductor of electricity.

Much like his premiership over the past week, a satirical blimp depicting Boris Johnson has been deflating. The problem is the high cost of helium, which meant that the blimp was Earth-bound and inflated with air for a protest in London. Just as well – with a looming helium shortage, I can think of better uses for the rare gas.

The post Eel delivers record-breaking voltage, pricey helium grounds Boris Johnson blimp, did top journals ban quantum foundations? appeared first on Physics World.

Planet-eating star could be lurking in nearby cluster


An unusual set of chemical fingerprints spotted in the light from a distant star could be the remnants of a digested planet, according to a new study by researchers in Sweden. In 2017 astronomers making spectroscopic observations of several stars in the open star cluster Messier 67 spotted one – dubbed Y2235 – with elevated levels of certain elements on its “surface”. These included carbon, magnesium and oxygen as well as heavier elements such as cerium, iron and yttrium.

Now, Ross Church, Alexander Mustill and Fan Liu at Lund University have explored how this unusual spectral signature could have arisen. By calculating the amounts of these elements dusted over the star’s roiling surface, and then modelling how a planet could have delivered them there, Church and his colleagues conclude that we are seeing the scattered remains of a planet roughly five to six times more massive than Earth.

Chemical fingerprints of the kind detected in the light from Y2235 can be created by means unrelated to planetary destruction. These include the churning effect of a star’s rapid rotation, or gravitational interactions with another star. However, Y2235 is an ageing, Sun-like star that is not thought to spin fast enough for this mixing to take place. Furthermore, the star shows no sign of having a companion star, says Church. “A third possibility is that two stars might have collided to have formed Y2235, but that is very unlikely even in a cluster like M67 – only in the richest stellar clusters are direct stellar collisions common,” he adds.

Multi-course meal

The team’s modelling of how the elements might have arrived onto Y2235’s surface suggests the process would have been something of a multi-course meal rather than a quick, stellar snack. In a typical scenario, the planet would have spiralled ever closer to Y2235 over more than six years, only being devoured in its entirety after several hundred orbits.

What could have sent the planet in towards the star is not known, but it might have been gravitational jostling with another planet in the Y2235 system. “Our Solar System is very stable, which is good for us, since the planets have remained on very similar orbits for the last 4.5 billion years or so,” explains Church. “The main reason for this is that the planets are far enough apart that they do not interact with each other much – they orbit mostly independently round the Sun. In a planetary system where planets are closer together they can interact more, and ultimately scatter off one another, and one of the effects of this can be to scatter planets into the star.”

“[The team’s] hypothesis matches the data pretty well,” says Hugh Osborn, an exoplanet researcher based at the Laboratoire d’Astrophysique de Marseille, in France, who wasn’t involved in the new study. He does add a note of caution however. “I won’t be entirely convinced we’re seeing the remnants of a planet, instead of just an anomalously metal-rich star, until we know a bit more about what the average star in M67 looks like.”

Either way, the method of looking for and examining the signs of destroyed worlds in the light from stars could teach us more about distant planets in the future. “We will never know the elemental abundances of our own planet because we can’t dig down to the core,” explains Osborn. “But for an exoplanet that’s been ripped apart and strewn across its star, we can directly measure the average composition of material that was in the planet.”

The research is described in a preprint on arXiv.

The post Planet-eating star could be lurking in nearby cluster appeared first on Physics World.

Hypofractionation: a new value proposition in radiation oncology

Hypofractionated and ultrahypofractionated radiation therapy – increasing dose per fraction to enable significantly fewer overall treatments – promises to unlock significant wins for public and private healthcare providers as well as the radiation oncology teams at the patient end of cancer treatment. While the drivers for hypofractionated procedures such as stereotactic body radiotherapy (SBRT) have been clear for some time – improved patient experience, increased patient throughput and reduced cost of care – the challenge now is to identify new treatment tools and protocols to realize these clinical and economic outcomes at scale.

For starters, clinicians need the ability to maintain submillimetre accuracy and precision throughout treatment delivery – identifying the target location in the body; automatically detecting, tracking and correcting for target motion; and accurately repointing the beam in real-time to support the clinical use of smaller margins to reduce the side-effects of treatment. Between treatment fractions, radiation oncology teams also need tools to efficiently and seamlessly rework treatment plans to account for anatomical changes (see “Adaptive planning”, below). What’s more, none of this cutting-edge functionality can come at the expense of system versatility or patient throughput.

Put another way: the new standard in hypofractionated radiation therapy will be a treatment system that can deliver the highest level of accuracy and precision to both stationary and moving targets, along with the “workhorse versatility” to efficiently treat the full range of clinical indications.
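The radiobiological trade-off behind hypofractionation is usually expressed through the biologically effective dose (BED) of the linear-quadratic model, BED = nd(1 + d/(α/β)). The sketch below compares a conventional and an ultrahypofractionated schedule using an assumed α/β of 10 Gy for tumour response; the schedules and the α/β value are illustrative, not taken from this article.

```python
def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Compare a conventional schedule with an ultrahypofractionated (SBRT-style) one,
# using an illustrative alpha/beta of 10 Gy
conventional = bed(n_fractions=30, dose_per_fraction=2.0, alpha_beta=10.0)   # 60 Gy in 30 fractions
sbrt = bed(n_fractions=5, dose_per_fraction=10.0, alpha_beta=10.0)           # 50 Gy in 5 fractions

print(f"Conventional 30 x 2 Gy: BED = {conventional:.0f} Gy")   # 72 Gy
print(f"SBRT         5 x 10 Gy: BED = {sbrt:.0f} Gy")           # 100 Gy
```

The higher BED of the shorter schedule, despite the lower total physical dose, is one reason why delivery accuracy and tight margins matter so much for these treatments.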

Workhorse versatility

A case in point is Accuray’s Radixact Treatment Delivery System, a helical radiotherapy platform that employs a continuously rotating gantry and unique dynamic collimation system to enable highly conformal dose delivery to diverse tumour sites throughout the body. Radixact has now been upgraded to incorporate motion-tracking and correction algorithms (collectively known as Synchrony) from Accuray’s flagship CyberKnife Treatment Delivery System, a robotic radiotherapy platform widely deployed in treating a range of disease indications using stereotactic radiosurgery (SRS) and SBRT.

We’re not waiting for the target to move and then going there – we’re going to where the target will be proactively.

Andrea Cox, Accuray

This enhanced capability means that the Radixact System with Synchrony is now able to track and synchronize the delivery beam to the target position as the tumour moves. In effect, dose is delivered continuously to the moving tumour target – with the accuracy and precision required for hypofractionated radiotherapy (i.e. tight margins and steep dose gradients) as well as for standard radiotherapy procedures.

“We are the only vendor able to detect targets during treatment, track that motion whether it is regular or irregular, and correct for it in real-time during treatment delivery, with no need for inefficient pausing or gating,” notes Andrew DeLaO, senior director, marketing, at Accuray. “What’s more, target detection and tracking is possible using either fiducial markers or without fiducials, using the patient’s anatomy.”

For targets that move unpredictably – as a result of digestion or bladder-filling, for example – intrafraction imaging detects the motion so that the Radixact System with Synchrony can synchronize the treatment beam to the detected target position as it moves. For targets that move cyclically – as a result of the patient’s breathing – the system anticipates the target’s position using predictive motion-modelling algorithms and continuously synchronizes to that position in real-time based on images captured during each treatment session.

Software aside, it’s Radixact’s unique collimation system (comprising ultrafast multileaf collimators and dynamic jaws) that enables real-time motion correction of the treatment beam. “We’re not waiting for the target to move and then going there – we’re going to where the target will be proactively,” explains Andrea Cox, senior director, product strategy, at Accuray.

She continues: “What we’ve learned over the years [with CyberKnife] is that a patient’s breathing pattern changes from moment to moment as well as day to day – i.e. their breathing actually changes as they relax during the few minutes that it takes to deliver a course of treatment. To take account of this, the model created prior to treatment is always updated in real-time with new images acquired during treatment delivery.”
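To give a feel for what predictive motion modelling involves – this is a generic sketch, not Accuray’s Synchrony algorithm – the Python example below fits a simple sinusoidal breathing model to recent target positions and predicts where the target will be a short system latency into the future. The breathing period, latency and motion trace are all assumptions.

```python
import numpy as np

def predict_target_position(times, positions, latency_s, next_time):
    """Predict a cyclically moving target's position 'latency_s' ahead of next_time.

    Fits a sinusoid-plus-offset model to recent samples with linear least squares
    (the breathing period is assumed known here); a real tracking system would
    also re-estimate the period and update the model continuously.
    """
    period_s = 4.0                     # assumed breathing period
    omega = 2 * np.pi / period_s
    design = np.column_stack([np.sin(omega * times), np.cos(omega * times),
                              np.ones_like(times)])
    coeffs, *_ = np.linalg.lstsq(design, positions, rcond=None)
    t = next_time + latency_s
    return coeffs @ np.array([np.sin(omega * t), np.cos(omega * t), 1.0])

# Illustrative superior-inferior tumour motion sampled over the last 10 s
t = np.linspace(0, 10, 100)
z = 5.0 * np.sin(2 * np.pi * t / 4.0) + 0.2 * np.random.randn(t.size)  # position in mm

predicted = predict_target_position(t, z, latency_s=0.1, next_time=10.0)
actual = 5.0 * np.sin(2 * np.pi * 10.1 / 4.0)
print(f"Predicted: {predicted:.2f} mm, ground truth: {actual:.2f} mm")
```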

Ahead of the curve

Just last month, Accuray unveiled the first clinical customer for the Radixact System with Synchrony – the Froedtert and Medical College of Wisconsin Clinical Cancer Center at Froedtert Hospital in Milwaukee, Wisconsin. Folded into that announcement was news that the Froedtert and MCW radiation oncology team has already treated the first patient – a 45-year-old man with lung cancer – using the new-look system, with Synchrony tracking the lung tumour in real-time as it moved with the patient’s breathing while automatically adjusting the treatment beam to keep it targeted on the tumour.

“In our hospital network we’ve seen an increase in the use of hypofractionation – for example, SBRT – as part of the cancer treatment, making it critically important that we are able to safely deliver the correct amount of dose precisely to the tumour, even to those that move, such as tumours in the thorax, abdomen and pelvis,” explains X Allen Li, MCW professor and chief of medical physics at Froedtert Hospital.

“With Synchrony we were able to deliver a reduced-margin treatment plan through all three fractions,” he adds. “Total treatment time for a fraction of 18 Gy was 16 minutes door-to-door, similar to our conventional radiotherapy procedures.”

Without motion synchronization, Li and his colleagues point out that a larger treatment field would have been needed to treat the entire path of the tumour movement. Additionally, the treatment time would have been longer as a result of on/off gating of the radiation beam to track the tumour moving in and out of the specified treatment window.

“Our comprehensive pretreatment tests and the initial patient treatment showed us how well the Synchrony technology works in the real-world clinical practice,” Li explains. “As a result, we now have an option for precisely and accurately delivering radiation to tumours as they move, which will expand the range of tumours we can confidently treat and the patients we can help.”

What patients want

For the near term, it’s evident that clinical adoption of hypofractionated radiotherapy is set to accelerate, with a top-down push from healthcare providers towards higher dose per fraction, fewer fractions, compressed treatment times, plus significant workflow efficiencies and lower cost of care.

In parallel, says Cox, there’s growing demand from patients for the benefits associated with hypofractionated treatment schedules – in essence, fewer clinical visits and a faster return to family and friends. “If you imagine a patient with a choice of going in for 30 conventional treatments over a six-week period or going in for five hypofractionated treatments in a one-week period – and for the same clinical outcome – there have to be some pretty compelling reasons for them not to choose the latter.”

Adaptive planning

Patients are complex systems in every sense. Between treatment sessions, they gain and lose weight; their stomach, bladder and bowel contents change; their organs may shift, rotate or deform; and their tumours may shrink, move or rotate.

The holy grail of online adaptive radiotherapy (ART) is not so far away

Andrew DeLaO, Accuray

Trouble is, traditional radiotherapy regimes rely on a single snapshot of the patient at the start of treatment, with most clinics limited in their ability to reimage patients and bound by rigid-body matching that does not account for any geometric deformations in patient anatomy. A plan attuned to the initial simulation can therefore become suboptimal as treatment progresses, rendering it unusable for hypofractionated or ultrahypofractionated radiotherapy.

“The holy grail is online adaptive radiotherapy (ART) and what we’ve done with the Radixact System with Synchrony is to take significant steps in that direction – a level of automation that allows the user to ‘set and forget’ to some degree,” says Andrew DeLaO, senior director, marketing, at Accuray.

He adds: “After every treatment, the Radixact System with Synchrony actually takes the dose that was delivered, deconstructs that dose and puts it back on the daily image so that you can see what that dose looks like versus the original plan. Offline planning tools automatically identify cases for review and possible plan adaptation using a red-yellow-green flag scheme.”

Equally significant is the use of automatic recontouring to accelerate plan adaptation, while maintaining the integrity of the original treatment plan versus tumour coverage, dose limits for organs-at-risk and overall toxicity.

“The online ART future is not so far away,” says DeLaO. “Where we’re heading is radiation oncology teams able to dynamically change the treatment plan in real-time during a treatment session while the patient is on the table.”

The post Hypofractionation: a new value proposition in radiation oncology appeared first on Physics World.

Proton therapy continues to show promise for children with cancer

The finite range of a proton beam confers high dose conformality to the tumour, while minimizing irradiation of non-target normal tissues. As such, proton therapy is proposed as a preferred irradiation technique for treating childhood cancers, particularly those affecting the radiosensitive developing central nervous system.

Two newly published research papers add to the growing body of evidence showing the potential benefits of proton therapy for paediatric patients. Both studies were led by Christine Hill-Kayser from the Perelman School of Medicine at the University of Pennsylvania and the Children’s Hospital of Philadelphia.

Improving survival outcomes

The first study focused on children with newly diagnosed medulloblastoma, a cancer at the base of the skull (Pediatr. Blood Cancer 10.1002/pbc.27972). Older children with this disease generally receive radiation to the entire brain and spine. This approach, however, can be highly toxic to the developing brains of children aged four or younger, who are typically treated with intense chemotherapy regimens instead. Unfortunately, these young children often relapse, with the highest risk of relapse in the posterior fossa where the tumour is primarily located.

The researchers evaluated 14 patients aged five and under who received proton therapy just to the tumour bed following surgery and chemotherapy. Four patients relapsed after treatment: three within the central nervous system outside of the posterior fossa and one within the tumour bed after subtotal resection. Across all patients, the five-year overall survival was 84% (48–96%) and the recurrence-free survival was 70% (38–88%). In nine children with available performance status follow‐up, the researchers saw no significant changes in their Lansky performance status.

While the study only examined a small cohort, the findings demonstrate great improvement over historical survival rates of 30–60% in very young patients who received intense chemotherapy without radiotherapy.
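Survival figures like these are typically obtained with a Kaplan–Meier estimator. The following sketch shows the estimator on a small, entirely fictitious cohort of 14 patients; it is included only to illustrate the calculation, not to reproduce the study’s data.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (relapse/death) occurred, 0 if censored
    Returns (event_times, survival_probabilities).
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, out_t, out_s = 1.0, [], []
    for i in order:
        if events[i] == 1:
            survival *= (at_risk - 1) / at_risk
            out_t.append(times[i])
            out_s.append(survival)
        at_risk -= 1
    return out_t, out_s

# Fictitious cohort of 14 patients (follow-up in years; 1 = event, 0 = censored)
t = [1.2, 2.5, 3.0, 3.5, 4.0, 4.2, 4.5, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0]
e = [1,   1,   0,   1,   0,   1,   0,   0,   0,   0,   0,   0,   0,   0]
for ti, si in zip(*kaplan_meier(t, e)):
    print(f"S({ti:.1f} y) = {si:.2f}")
```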

“Our study, while small, shows promising outcomes when we use proton therapy to target just the area of surgery in these cases as opposed to irradiating the whole brain and spinal areas,” explains first author Amardeep Grewal, chief resident in radiation oncology at Penn. The researchers suggest that this approach should be investigated further in young children with medulloblastoma.

Reducing risk of brainstem damage

In the second study, the research team evaluated the rate of brainstem necrosis in 166 children with central nervous system tumours treated with pencil-beam scanning (PBS) proton therapy (Acta Oncologica 10.1080/0284186X.2019.1659996). The median maximum brainstem dose in the treatment course was 55.4 Gy(RBE). In four patients who had received prior radiation, the cumulative median maximum brainstem dose was 98.0 Gy(RBE).

The researchers found that patients treated with PBS experienced significantly less brainstem toxicity than those treated with older techniques such as double-scattered proton therapy. One patient who had previously received twice-daily radiotherapy and chemotherapy experienced brainstem necrosis. At 24 months, the rate of patients experiencing brainstem tissue damage from PBS was 0.7%.

“The effect of proton therapy on the brainstem has been a subject of much debate, but our data show that pencil-beam scanning proton therapy does not increase the risk compared to conventional photon techniques,” says first author Jennifer Hyatt Vogel, who completed this work whilst a resident at Penn.

The authors say that these data warrant further study, especially in high-risk patients and those who have had prior radiation therapy. “Regardless of technique, expertise in proton therapy planning and strict adherence to safety constraints is essential, particularly in treatment of tumours near the brainstem,” adds Hill-Kayser.

The post Proton therapy continues to show promise for children with cancer appeared first on Physics World.

Nanocapsules deliver gene-editing payload

Researchers have developed a new non-viral nanocapsule to deliver a gene-editing payload into biological cells. The capsule, which is made of a biodegradable polymer, encapsulates the CRISPR-Cas9 protein together with its guide RNA. The structure could help overcome some of the problems associated with viral-vector delivery of gene-editing tools.

Shaoqin Sarah Gong. Courtesy: University of Wisconsin–Madison

CRISPR-Cas9 (clustered regularly interspaced short palindromic repeats) genome editing could potentially be used to treat many genetic diseases, including those currently without a cure. Most delivery technologies for CRISPR require viral vectors, however.

Although viral vectors are very efficient (viruses have, after all, billions of years of experience in invading cells), they can cause undesirable immune responses in the body. They also need to be altered to carry gene-editing machinery, rather than their own viral genes, into cells to alter their DNA (for example, to correct a problem in the genetic code that causes a disease). This process, which needs to be adapted to each new type of cell target, can be time-consuming and complex.

In recent years, researchers have begun developing non-viral vectors, which are typically easier and cheaper to produce and scale up. Many of these are based on cationic liposomal components or polymers and can successfully encapsulate CRISPR-Cas9. They are beset with problems though, including the fact that they are relatively large (more than 100 nm across), can only accommodate a low gene-editing payload, are unstable and, most importantly, highly cytotoxic.

The Cas9/sgRNA ribonucleoprotein nanocapsule

A team led by Shaoqin (Sarah) Gong of the University of Wisconsin-Madison in the US has now developed the Cas9/sgRNA ribonucleoprotein (RNP) nanocapsule (NC) to address these challenges. The RNP NCs are very small (around 25 nm in size) and are very stable in the extracellular space, including the bloodstream, thanks to their covalent nature. They also have high RNP loading content, good biocompatibility and high editing efficiency.

“Unlike other previous RNP delivery nanosystems that typically contain multiple copies of the RNP, the RNP nanocapsule we report on normally contains just one RNP per nanoparticle,” explains Gong. “What is more, we can conveniently modify the surfaces of the RNP NCs with various targeted ligands, such as peptides, so that they can be used to target different organs/cells and treat different types of diseases.”

The RNP NCs are also relatively straightforward to make and they can be lyophilized (freeze-dried), which makes it easier to purify, store, transport and dose them, she adds. And last but not least, the Cas9 protein and sgRNA are present in a 1:1 molar ratio. The RNP only survives for a short time within the target cell, thus producing fewer off-target effects.

“This is important since editing the wrong tissue in the body after injecting gene therapies is of grave concern,” says team member Krishanu Saha, who co-chairs a steering committee for a consortium on gene editing in the US. “If reproductive organs are inadvertently edited, the patient would pass on the gene edits to their children and every subsequent generation.”

The researchers made their nanocapsules by enriching monomers with different charges and functionalities around the Cas9 RNP complex. They then polymerized the structure to form the nanocapsule. “As mentioned, this polymer coating is stable in the bloodstream/extracellular space, but it falls apart inside cells so that the RNP can edit the cell genome,” explains Gong.

Krishanu Saha. Courtesy: University of Wisconsin–Madison

Gene editing experiments

She and her colleagues tested out their delivery capsules in gene editing experiments on murine retinal pigment epithelium (RPE) tissue and skeletal muscle. “We locally injected our nanocapsules into subretinal spaces or skeletal muscles. We found that the capsules efficiently delivered their gene-editing machinery and modified the appropriate target genes in the tissue in question. Furthermore, by functionalizing the surface of the nanocapsules, we were able to modulate the extent and efficiency of the gene-editing process.”

Xiaoyuan (Shawn) Chen, senior investigator at the National Institute of Biomedical Imaging and Bioengineering (NIBIB) at the US National Institutes of Health (NIH), who was not involved in this work, says that the new technique is a “cool” way of using relatively small sized nanocapsules for high efficiency loading. “The crosslinking the researchers employed makes the particles stable during the delivery phase but readily releases the payload inside the cytosol thanks to cleavage of the linkers by a molecule called glutathione. The imidazole groups present also allow efficient endosomal escape through a proton sponge effect.

“Although the current study has only attempted local delivery for RPE cells and skeletal muscle cells, the same principle may be used to deliver RNP targeting to other organs.”

Broadening the applications of CRISPR-Cas9 gene-editing technology

The Wisconsin-Madison team believes that its work will facilitate the development of safe and efficient delivery nanosystems for the CRISPR-Cas9 genome editing tools, for both in vitro and in vivo applications. “In particular, it will broaden the applications of this gene-editing technology and so help us better understand and treat various genetic diseases,” Gong tells Physics World.

“We now plan to apply this technique to deliver various CRISPR genome editing machineries to treat brain and eye diseases and are currently working with several clinical collaborators to this end.”

Full details of the current study are reported in Nature Nanotechnology 10.1038/s41565-019-0539-2. The researchers have also filed a patent on the nanoparticles they have made.

The post Nanocapsules deliver gene-editing payload appeared first on Physics World.

Study offers verdict for China’s efforts on coal emissions


Researchers from China, France and the US have evaluated China’s success in stemming emissions from its coal-fired power plants (CPPs).

CPPs are one of the main contributors to air pollution in China, and their proliferation over the last 20 years has had significant impacts on air quality and public health.

These impacts led authorities to introduce measures to control emissions from CPPs and reduce their effects.

Writing today in Environmental Research Letters (ERL), the researchers examined whether these policies have been effective and measured their benefits.

Dr Qiang Zhang, from Tsinghua University, China, is the study’s lead author. He said: “Between 2005 and 2015, the coal-fired power generation of CPPs in China grew by more than 97 percent. In 2010, CPPs’ sulphur dioxide, nitrogen oxide and fine particulate matter (PM2.5) emissions accounted for 33 per cent, 33 per cent and 6 percent of China’s total national emissions, respectively. The large amount of air pollutant emissions from CPPs causes fine particulate air pollution, which contributed 26 percent of the fine particulate nitrate and 22 percent of the fine particulate sulphate ambient concentration in 2012.

“To combat this, China introduced three primary policies for CPPs during 2005-2020. They aimed to improve energy efficiency by promoting large CPPs and decommissioning small plants during 2005-2020; brought in national emission cap requirements through the installation of end-of-pipe control devices during 2005-2015; and introduced ultra-low emission standards between 2014-2020.”

To measure the effect these policies had on emissions, the team developed two retrospective emission scenarios based on a high-resolution coal-fired power plant database for China.

They also developed two emission prediction scenarios to forecast the CPPs’ emission changes associated with the implementation of ultra-low emission standards and power generation increments during 2015-2020.

Finally, they evaluated the air quality and health impacts associated with CPPs’ emission changes during 2005-2020, using a regional air quality model and the integrated exposure-response model.

Dr Fei Liu, from the Universities Space Research Association, Goddard Earth Sciences Technology and Research, USA, is the study’s corresponding author. She said: “Our results show that overall, China’s efforts on emission reductions, air quality improvement and human health protection from CPPs between 2005-2020 were effective.

“We found that the upgrading of end-pipe control facilities could reduce PM2.5 exposures by 7.9 μg/m3 and avoid 111,900 premature deaths annually. Meanwhile, the early retirement of small and low-efficiency units could reduce PM2.5 exposures by 2.1 μg/m3 and avoid 31,400 annual premature deaths.

“This suggests similar measures could be taken in countries such as India, to enable the reduction of emissions alongside rapid economic development.”
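The health-impact arithmetic behind figures like these can be illustrated with a much-simplified attributable-mortality calculation. The sketch below uses the attributable fraction AF = (RR − 1)/RR with an invented population, baseline mortality rate and relative risk; the study itself used the more sophisticated integrated exposure-response model.

```python
def attributable_deaths(population, baseline_mortality_rate, relative_risk):
    """Premature deaths attributable to an exposure, via the attributable fraction
    AF = (RR - 1) / RR applied to the baseline (cause-specific) death count."""
    attributable_fraction = (relative_risk - 1.0) / relative_risk
    return population * baseline_mortality_rate * attributable_fraction

# Illustrative numbers only: 100 million people, a cause-specific baseline
# mortality rate of 0.6% per year, and RR = 1.15 for the PM2.5 increment avoided
avoided = attributable_deaths(100e6, 0.006, 1.15)
print(f"Avoided premature deaths per year: {avoided:,.0f}")   # ~78,000
```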

The post Study offers verdict for China’s efforts on coal emissions appeared first on Physics World.

Cosmic clash over Hubble constant shows no sign of abating


A new way to measure absolute distances in the universe has allowed scientists to work out a new value for the Hubble constant, which tells us how quickly our local universe is expanding. The latest expansion rate is consistent with other direct measures obtained from relatively nearby space, but in conflict with others that rely on the universe-wide spatial features of primordial radiation. This disparity has become more pronounced in recent years and suggests that our current understanding of cosmic evolution may need an overhaul.

Evidence for the universe’s expansion emerged in the 1920s, when Edwin Hubble first observed that galaxies move away from us more quickly, the farther they are from Earth. Since then, there have been ongoing disputes about just how rapid the expansion is. While astrophysicists have measured the Hubble constant with increasing precision, a gap remains between the values of the constant obtained using two different types of observation. What is more, this discrepancy cannot be explained away by known sources of error.

Establishing the velocity of objects receding in space simply involves measuring the redshift of their emission spectra, but pinning down their distance from Earth is much more complicated. One approach is to create a “distance ladder” that starts from Earth and moves outwards in a series of steps. This usually involves calibrating the absolute brightness of far-flung supernovae by observing other supernovae in galaxies closer to us that also contain pulsating objects of known brightness called Cepheid variables. The first step, or “anchor”, is to measure the distance from Earth to nearby Cepheids.
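Once a distance and a recession velocity are in hand, the Hubble constant itself is just their ratio, H0 = v/d, with v ≈ cz at low redshift. The Python snippet below makes that explicit for a single, invented supernova-host galaxy.

```python
C_KM_S = 299_792.458  # speed of light in km/s

def hubble_constant(redshift: float, distance_mpc: float) -> float:
    """Hubble constant from a low-redshift galaxy, H0 = v / d, with the
    recession velocity approximated as v = c * z (valid only for z << 1)."""
    velocity = C_KM_S * redshift
    return velocity / distance_mpc

# Illustrative supernova-host galaxy: z = 0.0123 at 50 Mpc (both made-up numbers)
print(f"H0 ~ {hubble_constant(0.0123, 50.0):.1f} km/s/Mpc")   # ~73.7
```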

The most precise distance ladder to date has been created by Adam Riess at the Space Telescope Science Institute in Baltimore, US, and colleagues. They used the Hubble Space Telescope to measure the distance to Cepheids lying in the Large Magellanic Cloud, some 150,000 light-years away. Their figure for the Hubble constant, 74.0±1.4 km s–1 Mpc–1, is an improvement on earlier measurements of their own and on the 72±8 km s–1 Mpc–1 obtained by Wendy Freedman of the University of Chicago and colleagues in 2001.

Cosmic clash

However, those results clash with values based on how quickly the universe expanded shortly after the Big Bang. Measuring the characteristic angular scale of temperature fluctuations within the cosmic microwave background (CMB) and then extrapolating forward using the standard cosmological model, researchers working with data from the European Space Agency’s Planck satellite in 2016 reported an expansion rate of 66.9±0.6 km s–1 Mpc–1. That value in turn is consistent with a figure of 67.8±1.3 km s–1 Mpc–1, obtained using data from the Sloan Digital Sky Survey to measure a characteristic length scale between galaxies containing supernovae – with the length scale also set by the CMB (in this case via density fluctuations).
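The size of the disagreement can be quantified by dividing the difference between the two central values by their combined uncertainty. Using the figures quoted above, a naive calculation (which treats both errors as independent and Gaussian) puts the tension at roughly 4.7 standard deviations.

```python
from math import sqrt

# Quoted central values and 1-sigma uncertainties (km/s/Mpc)
h0_ladder, err_ladder = 74.0, 1.4   # Riess et al., Cepheid/supernova distance ladder
h0_cmb, err_cmb = 66.9, 0.6         # Planck CMB + standard cosmological model

tension_sigma = abs(h0_ladder - h0_cmb) / sqrt(err_ladder**2 + err_cmb**2)
print(f"Tension: {tension_sigma:.1f} sigma")   # ~4.7 sigma
```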

The latest work was carried out by Inh Jee, Sherry Suyu and Eiichiro Komatsu of the Max Planck Institute for Astrophysics in Garching, Germany, alongside colleagues in Germany, the Netherlands and the US. It provides an independent way of checking distance-ladder calculations using gravitational lenses. These are massive galaxies that can create multiple images of more distant luminous objects by bending the objects’ light rays gravitationally.

As a first step, Jee and colleagues recorded the time delay when detecting different images from a flickering quasar. Because the lens bends the light from each image along a different path through space, these delays reveal how massive the lensing galaxy is. Next, the researchers measured the velocity of stars within the lens to estimate the lens’s gravitational potential, which can be used to calculate the radius of the galaxy. By comparing this size with the apparent separation between different quasar images, they were able to work out the distance from Earth to the lens.

Robust and independent

Jee and colleagues used this approach to measure distances to two lensing galaxies. Using this information, they converted previously measured relative distances to 740 type Ia supernovae into absolute distances. Reporting in Science, they calculate the Hubble constant to be 82.4±8.4 km s–1 Mpc–1. “This is one of the first works to give a robust, independent distance to a gravitational lens,” says Suyu.

Although the figure is less precise than previous results, the researchers – working in a collaboration called H0LiCOW – have since gone on to combine their approach with an earlier type of lensing measurement based purely on timing delays. After analysing six lenses, the collaboration arrived at a Hubble constant of 73.3±1.8 km s–1 Mpc–1, which is very similar to that of Riess’s group. The collaboration describes this result in a preprint on arXiv.

That figure, however, is at odds with a recent result from Freedman and colleagues, who calibrated the distance to supernovae using the steady intrinsic brightness of heavy stars known as red giants. Using data from the Hubble telescope, they calculated that the Hubble constant should be 69.8±0.8 km s–1 Mpc–1, which is midway between results from the two rival camps.

Radek Wojtak at the University of Copenhagen reckons that the tension between the two approaches means “we are getting closer to the stage” when changes to the standard model of cosmology should be considered. Such changes might include new forms of dark matter or dark energy, he says. But he cautions that researchers should continue to look for hidden errors. “The stakes are high,” he says. “We do not want to be fooled by poorly understood systematics.”

Princeton University’s Lyman Page agrees, arguing that it is still “too early to say with any certainty whether new physics is needed”. But he points out that until a few decades ago there was a 50% mismatch between different values of the Hubble constant. The current 5% discrepancy, he thinks, “is a mark of how precisely we know the universe”.

The post Cosmic clash over Hubble constant shows no sign of abating appeared first on Physics World.

Battle of the Elements finale, how to succeed in the space industry, a very hot superconductor

In this episode of our weekly podcast we talk about the remarkable career of Libby Jackson, who is human exploration manager at the UK Space Agency.

Sparks fly when carbon meets oxygen in the grand finale of the Battle of the Elements, which features a guest appearance by Chemistry World’s Patrick Walter.

The discussion heats up when we ponder the significance of a material that could be a superconductor at temperatures well above the boiling point of water – and other hot physics news this week.

The post Battle of the Elements finale, how to succeed in the space industry, a very hot superconductor appeared first on Physics World.

Healthcare can worsen global climate crisis


If the global healthcare sector were a country, it would be the fifth-largest greenhouse gas (GHG) emitter on the planet, according to a new report. Its authors, who argue for zero carbon emissions, say it is the first ever estimate of healthcare’s global climate footprint.

While fossil-fuel burning is responsible for more than half of the footprint, the report says there are several other causes, including the gases used to ensure that patients undergoing surgery feel no pain.

The report comes from Health Care Without Harm (HCWH), an international NGO seeking to change healthcare worldwide so that it reduces its environmental footprint and works for environmental health and justice globally. It was produced in collaboration with Arup.

The report says the European Union healthcare sector is the third largest emitter, accounting for 12% of the global healthcare climate footprint. More than half of healthcare’s worldwide emissions come from the top three emitters – the EU, the US and China. The report includes a breakdown for each EU member state.

An earlier report, published in May this year in the journal Environmental Research Letters, said the healthcare sectors of the 36 countries sampled were together responsible in 2014 for 1.6 GtCO2e (gigatonnes of carbon dioxide equivalent), or 4.4% of the total emissions from these nations; the same 4.4% figure is used in the HCWH report.

(Carbon dioxide equivalency is a simplified way to put emissions of various greenhouse gases (GHGs) on a common footing by expressing them in terms of the amount of carbon dioxide that would have the same global warming effect, usually over a century.)
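In practice the conversion is a single multiplication by the gas’s global warming potential (GWP). The sketch below uses illustrative 100-year GWP values drawn from the published literature; the report itself may use slightly different factors.

```python
# Illustrative 100-year global warming potentials (literature values; the report
# may use other factors)
GWP_100 = {
    "CO2": 1,
    "N2O": 265,          # nitrous oxide
    "desflurane": 2540,  # fluorinated anaesthetic
    "sevoflurane": 130,
}

def co2_equivalent(mass_kg: float, gas: str) -> float:
    """Convert a mass of greenhouse gas into kg of CO2-equivalent."""
    return mass_kg * GWP_100[gas]

# Example: 1 kg of nitrous oxide released during anaesthesia
print(f"1 kg N2O ~ {co2_equivalent(1.0, 'N2O'):.0f} kg CO2e")   # 265 kg CO2e
```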

HCWH says well over half of healthcare’s global climate footprint comes from fossil-fuel combustion. But it identifies several other causes for concern as well. One is the range of gases used in anaesthesia to ensure patients remain unconscious during surgery.

These are powerful greenhouse gases. Commonly used anaesthetics include nitrous oxide, sometimes known as laughing gas, and three fluorinated gases: sevoflurane, isoflurane and desflurane. At present, the greater part of these gases enter the atmosphere after use.

Research by the UK National Health Service (NHS) Sustainable Development Unit shows that anaesthetic gases account for 1.7% of the country’s healthcare climate footprint, most of it attributable to nitrous oxide use.

The UN climate change convention (UNFCCC) found that in 2014 a group of developed nations – home to 15% of the global population but accounting for 57% of global GDP and 73% of global health expenditure – was responsible for 7 MtCO2e (megatonnes of carbon dioxide equivalent) of emissions from medical nitrous oxide use.

The UNFCCC concluded that the full impact of the gas’s global use in anaesthesia “can be expected to be substantially greater”.

Use is growing

For fluorinated gases used in anaesthesia, global emissions to the atmosphere in 2014 were estimated to add 0.2% to the global healthcare footprint. Because of the growing use of these gases, increasingly chosen in preference to nitrous oxide, the footprint from anaesthetic gases is also likely to increase.

In measured tones, HCWH says: “Wider adoption of waste anaesthetic capture systems has the potential to be a high impact healthcare-specific climate mitigation measure” – or in other words, trap them and dispose of them carefully before they can just escape through an open window to join the other GHGs already in the atmosphere.

But HCWH adds a warning: “For many individual health facilities and systems of hospitals the proportion of the contribution of both nitrous oxide and fluorinated anaesthetic gases to their climate footprint can be significantly higher.

“For instance, Albert Einstein Hospital in São Paulo, Brazil found that GHG emissions from nitrous oxide contributed to nearly 35% of their total reported GHG emissions in 2013.”

The report also noted that choosing to use desflurane instead of nitrous oxide meant a ten-fold increase in anaesthetic gas emissions.

Other remedies available

The HCWH report also sounds the alert about metered-dose inhalers (MDIs), devices which are typically used for the treatment of asthma and other respiratory conditions, and which use hydrofluorocarbons as propellants. These are also highly potent greenhouse gases, with warming potentials between 1480 and 2900 times that of carbon dioxide.

Again, though, the report says the full global emissions from MDIs are probably much greater than today’s figure suggests. Alternatives to MDIs, such as dry powder inhalers, it says, are available and deliver the same medicines without the high global warming potential propellants.

The report argues for the transformation of the healthcare sector so that it meets the Paris Agreement goal of limiting temperature rise attributable to climate change to 1.5 °C.

HCWH says hospitals and health systems should follow the example of the thousands of hospitals already moving toward climate-smart healthcare via the Health Care Climate Challenge and other initiatives.

Welcoming the report, the director-general of the World Health Organization, Tedros Adhanom Ghebreyesus, said hospitals and other health sector facilities were a source of carbon emissions, contributing to climate change: “Places of healing should be leading the way, not contributing to the burden of disease.”

The post Healthcare can worsen global climate crisis appeared first on Physics World.
