Category Archives: Physics

Another argument for overhauling the Nobel Prize

The Nobel Prizes were announced last week, and the physics prize went to three scientists “for theoretical discoveries of topological phase transitions and topological phases of matter.” (I admit that even as a physicist I have only a vague idea what that actually means, so I’m not going into it.) The Nobels are the premier award for science, but the scientific context in which they were conceived is vastly different from how science is done in the modern world: Alfred Nobel’s will (which sets out the framework for the prizes) was drawn up in 1895, two years before J.J. Thomson discovered the electron. (He got a Nobel for this in 1906.) The science world is fundamentally different today than it was around the turn of the 20th century.

This is not to say that the turn of the 20th century was a dull time in science: on the contrary, science was exploding with activity, and whole new fields were opening up. In physics alone, x-rays, radioactivity, the electron, and black body radiation were all discovered within five years (1895-1900). But the model of the scientist at this time was still very much the solitary figure toiling in a lab, perhaps with assistants or a collaborator or two, but not with the highly collaborative lab system that is ubiquitous today. To borrow a phrase from historiography, the traditional model of science is very much a Great Man model, where individual men (and only men, even though there have always been women doing extraordinary scientific work with little to no support or acknowledgement) profoundly shape the scientific era through their work and influence. The Nobels, then, are modelled in this same fashion: no more than three winners can be named, it cannot be awarded to groups or posthumously, and the award must be given for a specific contribution to science.

This is no longer a viable framework by which to reward excellence in science. There have been criticisms levelled at the Nobels for decades, and certainly there are many pieces of extraordinary science that have been overlooked for the prize. Women and people of colour have been drastically underrepresented: only two women have won Nobels in physics (Marie Curie in 1903, Maria Goeppert-Mayer in 1963) and about 10% of physics laureates are people (men) of colour, although who is racialized has certainly not been constant throughout the history of the Nobels. No-one from Africa or South America has won the physics Nobel.

Lots of people have made these arguments before, and made specific note of people whose work has been overlooked (*cough*VeraRubin*cough*). These are all robust arguments and I feel I have little to add to those: the structure of the Nobels heavily favours Western men in prestigious facilities, and making the Nobels more inclusive requires a lot of work from a lot of levels of the scientific realm. However, even if the prize is awarded to a more diverse group of laureates in coming years, it still operates under the premise that extraordinary science is done by individuals rather than groups, and one look at the author list of any big paper from CERN or TRIUMF should tell you that that’s not the case.

Focusing on individuals rather than groups means that the work that is rewarded is somewhat misrepresented. While science in the early 1900s was growing by the sorts of leaps and bounds that the Nobel was designed to reward, even the groundbreaking work of today is in some senses incremental. There were decades of work put into understanding gravitational waves computationally, theoretically, and analytically, and all that work was necessary both to build LIGO and to be able to interpret data from it. While the discovery of GW150914 was a singular shift in our ability to understand the universe, that discovery was the culmination of a mountain of scientific research and literally thousands of people’s contributions. Isolating only the final discovery from the context of preceding work makes no sense, and is a fundamentally inaccurate narrative to write about the scientific process.

It also, incidentally, reinforces the traditional model of Scientist as Devoted Monastic Scholar, where science is a calling only accessible to the most brilliant and devoted among us. This is nonsense: insisting that the Proper Way To Do Science a) exists in the singular, b) is done in isolation and drudgery, and c) is necessarily all-consuming is a recipe for burnout. This model is regressive, extremely exclusionary, conducive to bad science and worse mental health, and a terrible yardstick by which to evaluate scientists and their work.

As well as failing to recognize the contributions of labs and collaborations, the focus on singular discoveries has meant that whole swaths of physics are un(der)represented in the Nobels. I’ve tallied up the fields listed with each Nobel physics laureate, and plotted the data below. For each prize awarded, the categories listed are each considered individually, but if two or three scientists split the prize for the same work, each category is counted only once. Categories that were used to describe only one Nobel are: applied electromagnetism, applied mechanics, applied optical physics, cosmic radiation, cosmology, critical phenomena, electron optics, electronics technology, fiber technology, interferometry, mechanics, metals, neutrino astrophysics, plasma physics, quantum optics, space physics, and theoretical physics; none of these are included in the plot, for brevity.

Number of Nobels awarded for each subfield of physics. Categories with only one Nobel listed are not included for brevity.
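
For anyone curious, the tally is easy to reproduce; here's a minimal sketch of the sort of counting and plotting involved, assuming a hand-compiled list of the categories attached to each prize (the category strings and entries below are placeholders, not the actual dataset).

```python
from collections import Counter
import matplotlib.pyplot as plt

# One entry per awarded prize: the list of categories attached to it.
# (Placeholder entries; the real list was compiled by hand.)
prizes = [
    ["particle physics", "experimental physics"],
    ["condensed matter physics"],
    ["particle physics", "theoretical physics"],
    ["astrophysics"],
    # ... and so on, one entry per prize ...
]

# Count each category once per prize, even if two or three laureates
# split the prize for the same work.
counts = Counter(cat for categories in prizes for cat in set(categories))

# Drop categories that describe only one Nobel, as in the plot above.
counts = {cat: n for cat, n in counts.items() if n > 1}

# Horizontal bar chart, sorted by frequency.
cats, ns = zip(*sorted(counts.items(), key=lambda kv: kv[1]))
plt.barh(cats, ns)
plt.xlabel("Number of Nobels awarded")
plt.tight_layout()
plt.show()
```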

I’m surprised that there are so few awards given for work in astrophysics, cosmology, and gravitational physics, ie, work concerning the huge scales of the universe. Instead, the significant majority of the Nobels have been given for discoveries at the atomic level or smaller. The defining frameworks for the study of the universe at the largest scales (general relativity) and the smallest scales (quantum mechanics) were both developed within the era that the Nobels have been awarded, so both areas of physics have been rich with discoveries and breakthroughs throughout the past hundred years.

There’s also a dearth of Nobels given for research that falls in the range of scales where classical mechanics is sufficient: aside from a few prizes awarded for superfluidity, there have been no Nobels awarded for fluid mechanics. There’s nothing relating to environmental, planetary, geological, solar, or atmospheric physics, and nothing that can be considered interdisciplinary beyond the overlap with chemistry. Applied physics is almost entirely ignored, and the prizes listed as being for experimental physics are largely also categorized as atomic or particle physics.

Here’s my theory about why this is the case: the Nobels are set up to reward single, defining discoveries, and the nature of small scale physics (especially particle physics) meshes better with that focus than many other fields. It’s straightforward to point to “we discovered a new particle” as a groundbreaking discovery, but it’s a little fuzzier to say “we figured out how to measure cosmological distances” and fuzzier yet to say “we understand the structure and circulation of the atmosphere.” There is a lot of work that leads to the discovery of a new particle, certainly, but one day there was no J/psi meson and then the next day there was. But describing the structure of the atmosphere was (and is) done in incremental pieces: there’s no clear single instance in time when the discovery happened. Understanding the structure of the atmosphere is extremely important, but it is difficult to point to a single prominent discovery or development that stands above the rest of the body of work.

(Also, the physics of the extremely small is tantalizing, since it is frequently weird and entirely inaccessible in everyday life. Since this is also the case for the physics of the extremely large, I have no satisfying explanation for why astrophysics and cosmology are so underrepresented.)

Ultimately, I think this is why we should overhaul the Nobels: clearcut discoveries typically involve hundreds or thousands of people, and individual people typically push the frontier of science by increments rather than by revolution. I have no knowledge whatsoever of Swedish law, and so I have no idea how legally entrenched the award criteria are; obviously it’s not nothing to overhaul a prominent international award rooted in a legal will. It’s quite possible that there is no legal way to adjust the number of possible laureates, and it’s quite possible that there is no legal way to permanently cease awarding the prize. But I think it’s time that those options are considered in depth by the Nobel Foundation, because the model of the Nobels is fundamentally incompatible with how scientific progress is made today. The most prestigious prize in science should reflect the collaboration and continuous progress woven into the ecology of the modern scientific world, and it should reflect the diversity of both scientists and scientific endeavours undertaken.

I am unconvinced that a single prize for all of physics makes much sense anymore, and there are critically important areas of physics that deserve recognition as well as the traditional fields. I believe that the prestige of the Nobels can be maintained while expanding the number of prizes awarded and increasing the diversity of work considered for recognition. Science is becoming more inclusive and collaborative, and though there is substantial work to be done at all levels of the scientific community, modernizing the Nobels is one way for the highest echelons of the scientific community to lead the way.

Science Borealis Carnival: National NMR Facility Faces Closure

To celebrate our one-year anniversary, Science Borealis is having a blog carnival! While the theme is “The Most Important Science News in My Field in 2014,” I’m interpreting this somewhat loosely. I think the biggest ongoing story in Canadian science is the sustained active cuts and passive underfunding of scientific research by the Harper government; however, this is by no means contained to this year, and to some extent, physics and astronomy have not borne the brunt of these cuts the way environmental science has. This is not to say that the state of Canadian physics, astronomy, and space science is uniformly rosy and healthy: lots of programs and institutions have weathered funding cuts and seen grant programs allowed to lapse, and, notably, the Canadian Space Agency got a failing grade in Evidence for Democracy’s Can Scientists Speak? report. (Environment Canada, which has been one of the most visible sources of frustrated scientists unable to speak about their work, got a C-.)

The 21 T magnet at the National Ultrahigh Field NMR Facility for Solids

Possibly the most unassuming looking world-class physics lab in the nation. Source: http://nmr900.ca/instrument_e.html

But to the best of my knowledge, no physics or astronomy facility that can be described as “the only one of its kind in Canada” has yet had to shut its doors as a result of the war on science. (If you know of one, please let me know!) However, this may change early next year, as the National Ultrahigh-Field NMR Facility for Solids is in peril of closing permanently in March 2015. (The lab announced in late November that barring immediate reprieve it would be closing on December 1, but emergency funding was found, and the lab will remain open until March.) The NUF-NMR facility houses a 21 tesla magnet, which is used to probe the atomic structure of biological samples and novel materials. This magnet is the strongest in Canada, and the strongest in the world dedicated to studying solids. All NMR work requires a strong magnetic field to resolve the fine differences in nuclear resonance spectra, but the stronger the field, the higher the resolution of the spectra and the more elements that can be analyzed in the apparatus. Since this is the strongest magnet in the nation, if this lab closes there will be no facility in Canada that can analyze materials containing magnesium, gallium, germanium, zirconium, indium, barium, or lanthanum. Note that these are not all rare elements: it’s not just research into rare and exotic materials that would be curtailed by closing this lab.
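
To put a rough formula behind the “stronger magnet, better resolution” point (this is just the textbook Larmor relation, nothing specific to this facility):

```latex
% A nucleus with gyromagnetic ratio \gamma resonates at the Larmor frequency
\nu_0 = \frac{\gamma}{2\pi} B_0
% Two chemically distinct sites separated by a chemical shift of \delta ppm
% are therefore split in frequency by
\Delta\nu = \delta \times 10^{-6}\,\nu_0 \;\propto\; B_0
% so the same ppm difference is easier to resolve at 21 T than in a weaker magnet.
```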

The facility’s funding woes started in 2012, when scientific infrastructure funding was frozen for a year and the NRC was overhauled to be a business-oriented lab for hire rather than a public research institute. The facility is housed in an NRC building, and received funding and support from the NRC before the restructuring. However, after the restructuring, the support was not renewed, and the funding the NRC had already committed ran out this year. The lease on the space from the NRC is $100,000 per year, and the directors estimate that another $160,000 is needed to cover operational costs.

While this sounds like a lot of money, in research terms it is not that much. The facility cost $11.8M to build, and for the want of $0.26M it may close, because all the grant programs it previously applied to (successfully, presumably) are now shuttered or restructured. Not that my research is comparable, but when I got my notification of resource allocation from Compute Canada last year, they included an estimate of how much my allocation would cost (were I paying it out of pocket, which I’m not, obviously). My modest allocation, for one grad student’s work, cost ~$75,000 per year. Obviously the funding sources are wildly different, but for the price of four modest supercomputer allocations, you could keep a unique Canadian facility open for another year. That is not even close to an outlandish sum of money for the substantial scientific payoff it provides.

This has been a theme of the war on science: while the budget cuts are presented in terms of efficiency and fiscal responsibility, many of the casualties have had modest budgets and outsized scientific impact. The fisheries library in New Brunswick that was shut (along with several others) comes to mind: the government spent several million dollars renovating and updating the library, and then closed it months later to save a few thousand dollars. I chalked that up to an ideological motivation, given the sustained hampering of environmental science work, but now it seems like there’s at least some haphazard slash-and-burning going on too.

I’m surprised that the NMR facility is facing such a funding crunch, in part because this facility seems to be exactly the sort of lab that the government is supposedly trying to foster: NMR is used in a lot of applied and industrial science, especially materials science and biological physics. That it qualifies for not a single grant program is baffling — surely with all this focus on funding industrial and applied science, there would be expanded funding programs for facilities that do that work? Much of the scientific community has said that this “refocussing on applied and industrial science” rhetoric is empty at best, and the NUF-NMR’s situation is good evidence that that’s not just dark or bitter speculation. None of the work listed on the facility’s research page has obvious political ramifications the way, say, the ELA’s publication list does. A lot of it sounds very useful, and much of it (particularly the pharmaceutical section) looks like it could easily be economically profitable. That a world-class facility like this is facing imminent closure, shuttering multiple research programs at universities across the country, is a clear indication that all science is under attack in Canada, not just the science with potential political ramifications.

Since the facility’s situation has come to light, the NRC has agreed to waive the lease temporarily (read: until this is safely out of the news), and from the sounds of the lab’s news page, there are negotiations in the works to secure some measure of stability. That’s good, but it’s only a matter of time until the next funding crisis comes around, and that’s likely to be sooner rather than later.

Robots in Spaaaaaaaace!

I realize that most of the hubbub about Curiosity, the newest Martian rover mission, was in August after the successful landing and subsequent initial instrument tests, but is it ever too late to talk about robots on other planets? No! Consider this my tiny endeavour to keep space science in the public’s eye, even when NASA’s not just landed a small car right side up on another planet with perfect precision.

Curiosity successfully landed on August 5th, and after a perfect landing, all the instruments are in excellent working order. The instruments on board are all described on the NASA Jet Propulsion Lab’s site, with a blurb about how they work and what sort of things they’re looking for. For this post, I’m going to focus on the Canadian instrument on board.

The Alpha Particle X-ray Spectrometer, or APXS, is the only all-Canadian contribution to the mission. It’s designed by Ralf Gellert, a professor at the University of Guelph (previously known best for cows with windows in their stomachs and pigs with phosphate-reduced poop), built by Macdonald-Dettwiler Associates (who also built the Canadarm), funded by the Canadian Space Agency, and the research involves a group at Guelph as well as people at a few other Canadian universities. I’ve got some thoughts about what the APXS represents to Canadian science and science policy, but I’ll save that for a second post. I’ve heard press about various other instruments on board, including the laser spectrometer, the array of cameras, and the meteorological station, but not very much about how the APXS works and what it can do. So let’s talk about how it works!

The APXS is a pop-can-sized instrument mounted on Curiosity’s robotic arm that uses two types of radiation to determine the elemental composition of rocks, soils, and dirt on Mars. Six radioactive curium-244 sources covered with a thin titanium foil sit around an X-ray detector. Curium (element 96), like many of the elements heavier than uranium (element 92), radioactively decays by emitting an alpha particle, which is 2 protons and 2 neutrons bound together. When curium emits an alpha particle, it decays to plutonium-240, and the newly formed plutonium emits X-rays as it settles into its ground state. The X-rays from the plutonium and the alpha particles from the curium are the radiation used to analyze a sample; the other by-products are typically of lower energy and are filtered out of the radiation beam by the foil over the sources. This radiation streams out of the sources in all directions, so to focus the beam, a metal ring surrounds the sources and the detector to absorb radiation emitted at wide angles.

Photo and schematic of the APXS

Left: Photo of the APXS. The detector itself sits in the middle of the ring behind the sources. Photo from JPL. Right: A schematic of how the APXS works. The instrument is shown as if it were pointing at a table-top. Picture from the University of Guelph.

Two kinds of radiation are used so that a wide range of elements can be detected. The alpha particles are used for Particle Induced X-ray Emission, or PIXE, which is used to detect elements with a low atomic number. The X-ray radiation is used in X-ray Fluorescence, or XRF, and detects elements with a higher atomic number. Combining the two techniques means that elements from sodium (atomic number 11) to zirconium (atomic number 40) can all be clearly detected in the combined spectrum.

In PIXE analysis, particles (alpha particles here, but other particles like protons can also be used) are beamed at a sample. The particles are emitted with a known, fixed energy and collide with the atoms in the sample, where some of the particles will interact with the inner shells of the electron structure. Electrons in an atom sit in a series of increasingly energetic levels, each with a fixed energy. If a particle hits the atom and interacts with an electron, it imparts energy to the electron. Since the electron can only have a fixed amount of energy at any given level, if it gains energy it must jump up to a higher level, which leaves a gap in the level it started at. An electron from a higher energy level (not necessarily the one that originally jumped up) then falls down to fill the gap in the lower energy shell, and emits an X-ray carrying the difference in energy between the two levels, so that energy is conserved. This is clearer with a picture!

Schematic of PIXE process.

Alpha particle comes in, excites an electron and scatters off; then an electron (either the one that was bumped up in energy or another one) falls to a lower energy and the atom emits an X-ray carrying the difference in energy between the starting and finishing energy levels of the second electron (ie, the electron that drops down in energy).

The process depends on the inner (ie lower energy) shells of electrons interacting with the incoming particle. The particles are most likely to interact with the outermost shell or two of the electron levels of an atom, so if an atom has several shells, there are too many electrons shielding the lower energy electrons for many interactions to occur. PIXE is therefore most effective for atoms with a low atomic number.
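
For readers who like an equation with their pictures, here is the one-line version of where the X-ray energy comes from, plus the rough scaling that makes each element's peaks unique (the numerical form is Moseley's law, an approximation I'm adding for illustration, not something from the original post):

```latex
% The emitted X-ray carries the energy difference between the two levels:
E_{\mathrm{X}} = E_{\mathrm{upper}} - E_{\mathrm{lower}}
% For the innermost (K-shell) transitions this grows roughly as the square of
% the atomic number Z (Moseley's law):
E_{K\alpha} \approx 10.2\ \mathrm{eV} \times (Z-1)^2
% e.g. for iron (Z = 26) this gives about 6.4 keV, which is where the iron
% peak sits in APXS spectra.
```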

XRF does much the same thing, but instead of using a particle to induce the electron’s transition, it uses X-rays. X-rays are useful because they can interact with the inner-shell electrons in heavier elements more easily than the massive alpha particles can. XRF is then useful for detecting the heavier elements that PIXE cannot, but because its efficiency drops at low atomic number, PIXE (which is more efficient there) is needed to gather a robust spectrum.

But doing PIXE and XRF on Mars poses problems: even the very thin Martian atmosphere will absorb low energy X-rays, so the lightest element that the APXS can detect is sodium (atomic number 11). Hydrogen, carbon, nitrogen, and oxygen (atomic numbers 1, 6, 7, and 8 respectively) are all invisible to the APXS, and oxygen in particular is abundant in the samples. Most of the heavier elements are present as oxides, so the mineralogy becomes very important in the elemental analysis. Plus the atmosphere varies, like any atmosphere, and the temperature fluctuates, and generally the conditions are not as pristine as they are in a controlled lab setting. This means that there’s more calibration work needed on Mars than in the lab, and also that the invisible elements need to be carefully treated.

So when a sample is being analyzed, Curiosity’s robotic arm extends (slowly and carefully, to avoid any collisions) out towards the sample rock. A pressure sensor on the front of the instrument trips when the instrument touches the rock, and the protective doors that cover the aperture of the instrument when it’s not in use open. The electronics are switched on, and the detector starts to gather a spectrum of X-rays emitted at various energies from the sample. The instrument is left in place for between a few minutes and a few hours to gather data, which is stored and then beamed back to Earth in the daily data dump. When it’s finished collecting data, the arm moves back from the rock, the doors close, and the next sample is chosen.

The spectra that come back look something like this:

A typical spectrum from the APXS, taken from Spirit

A typical APXS spectrum. The horizontal axis is energy of the detected x-ray, and the vertical axis is the number of x-rays of a given energy detected. Image from JPL.

The horizontal axis is the energy of the detected X-ray, and the vertical axis is the number of X-rays detected at that energy. The spectra consist of peaks at specific energies, and the individual elements are identified by the energies of a series of characteristic electron transitions. The emitted x-ray of a given transition has a fixed energy, equal to the difference in energy between the two states of the electron. Since different elements have different atomic structures and different amounts of energy between the electron shells, the x-rays emitted by different elements undergoing either PIXE or XRF will have different energies. Working backwards, then, if a spectrum is collected and there is a peak at an energy corresponding to a transition of a given element, then that element is present in the sample. The precise composition of the sample is determined by calculating the relative area under each of the peaks.
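
As a toy illustration of that “work backwards from the peak energies” step (emphatically not the real APXS analysis pipeline, which fits the full spectrum; the line energies below are standard approximate Kα values and the peak positions are invented):

```python
# Toy peak identification: match measured peak energies (keV) against a small
# table of approximate K-alpha line energies for common rock-forming elements.
KALPHA_KEV = {
    "Na": 1.04, "Mg": 1.25, "Al": 1.49, "Si": 1.74, "S": 2.31,
    "Cl": 2.62, "K": 3.31, "Ca": 3.69, "Ti": 4.51, "Fe": 6.40,
}

def identify(peak_energies_kev, tolerance=0.05):
    """Return (peak energy, element) pairs for peaks that match a known line."""
    matches = []
    for e in peak_energies_kev:
        element, line = min(KALPHA_KEV.items(), key=lambda kv: abs(kv[1] - e))
        if abs(line - e) <= tolerance:
            matches.append((e, element))
    return matches

# Hypothetical peak positions read off a spectrum by eye:
print(identify([1.74, 3.69, 6.41]))
# [(1.74, 'Si'), (3.69, 'Ca'), (6.41, 'Fe')]
```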

The invisible elements complicate the analysis considerably, especially since there’s so much oxygen in the rocks. The mineralogy of the rocks must be accounted for, and information from other instruments (including the cameras — visual identification is very helpful) is used to make initial estimates of the proportion of each element in its various oxidation states. For example, iron oxide may be ferrous (FeO) or ferric (Fe2O3), though to the spectrometer, it’s all just iron. So to do the analysis, knowledge about similar rocks on Earth is used to estimate the proportion of iron in each oxidation state. After all the expected oxygen from the bound minerals is accounted for, any “invisible element” left over may be bound water. Many minerals have water bound within their crystals, and Spirit and Opportunity found evidence of bound water in Martian rocks using this analysis.
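
Here's a deliberately over-simplified sketch of that oxide bookkeeping, just to show the flavour of the accounting (the element fractions and the choice of oxides are invented for illustration; the real analysis is far more careful):

```python
# Toy oxide bookkeeping: take measured element mass fractions, assume each
# element is locked up in a particular oxide, and see how much of the sample
# mass is left over as a candidate for bound water and other invisibles.
OXIDE_PER_ELEMENT = {      # grams of oxide per gram of element
    "Si": 60.08 / 28.09,   # SiO2
    "Fe": 71.85 / 55.85,   # FeO (assumed ferrous; Fe2O3 would give a different factor)
    "Mg": 40.30 / 24.31,   # MgO
    "Ca": 56.08 / 40.08,   # CaO
}

measured = {"Si": 0.21, "Fe": 0.14, "Mg": 0.05, "Ca": 0.06}  # invented mass fractions

accounted = sum(frac * OXIDE_PER_ELEMENT[el] for el, frac in measured.items())
print(f"mass accounted for as oxides: {accounted:.2f}")
print(f"left over (bound water and other invisible elements): {1 - accounted:.2f}")
```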

Spirit and Opportunity were sent to Mars to find water in some form (and did!), while Curiosity was sent to examine whether or not, at some point in its geological history, Mars could’ve sustained life. It’s important to note that it’s not looking for *life*, just evidence that conditions hospitable to life once occurred. Due to the raging success the APXS has had on Mars (the instrument on Opportunity is, to the best of my knowledge, still functioning, eight and a half years into a 90-day mission), other APXSs are planned for missions to the Moon by the Indian and Russian space agencies (the Chandrayaan-2 rover, landing hopefully in 2016) and to Comet 67P/Churyumov-Gerasimenko by the European Space Agency (the Philae lander, landing in 2014).

More info:

  1. A summary of the APXS by the Canadian Space Agency
  2. One of the papers outlining the design of the first APXS used on Mars on the Pathfinder mission
  3. Major paper with results from the instrument on Spirit, one of the two Mars Exploration Rovers

Whether the Boson is Higgs or Higgs-like, It’s Still A New Fundamental Particle

The news out of CERN that a new, heavy, subatomic particle has been discovered by the ATLAS research group has the science-y part of the internet all a-twitter. It’s certainly not every day that new fundamental particles of nature are discovered, and to be 99.99995% certain that it’s an accurate conclusion is no small feat.
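
(That percentage is the usual “five sigma” discovery threshold expressed as a confidence level; if you want to check the conversion yourself, it's a one-liner, with the last digit depending on whether you count one tail of the normal distribution or both.)

```python
from scipy.stats import norm

# Probability that pure background fluctuates up by 5 sigma or more:
p_one_sided = norm.sf(5)        # ~2.9e-7
p_two_sided = 2 * norm.sf(5)    # ~5.7e-7

print(f"one-sided confidence: {100 * (1 - p_one_sided):.5f}%")  # 99.99997%
print(f"two-sided confidence: {100 * (1 - p_two_sided):.5f}%")  # 99.99994%
```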

The Higgs boson is, in one sentence, a particle which is theorized to give other fundamental particles (like electrons and quarks) mass. There are plenty of people who’ve done primers and more detailed explanations of what the Higgs boson is, and for the sake of getting this up while everyone is still reading about all things Higgs, I’ll skip the drawings this time and point you elsewhere for the basic explanation.

What I do want to talk about are some of the significant results of such a significant result. The Large Hadron Collider was built essentially to find this particle, and while it’s not entirely clear whether it is definitely a Higgs boson or an exotic Higgs-like boson that we hadn’t anticipated, something new has been found. Getting such a positive result underscores the worth of large-scale collaborations. Large-scale science is very difficult to get off the ground due to the sheer scale of resources necessary to build the devices. Things like particle colliders, gravitational wave detectors, space telescopes, even the shuttle program, fall under this category, and because there are so many resources poured into these programs, there’s extra pressure for them to succeed. It’s heartening when they do, because inevitably when big science programs that probe the edges of our knowledge of the universe come up, there are people who bemoan the investment and say that the money would be better spent doing something practical.

Sure, we need money going towards practical things, but I agree with Neil deGrasse Tyson on this: we need big, visible, exciting projects. We need things that excite our collective imagination to push innovation forward and give students and young researchers something to aspire to, and discoveries like the Higgs boson both fill that need and show that the boundaries of science can be pushed. The knowledge that the Higgs boson (or something like it) exists may not make an appreciable difference in people’s everyday lives, but that moment of wonder is important. Without those “wow…!” moments, we don’t have a grand vision of scientific exploration, and without that vision, science stalls in the realm of what we already know and understand to some degree, and never makes it much past the boundary between what we anticipate and the unexpected. How do we push the boundaries of knowledge without a grand vision? We don’t, and moments like today’s announcement are the culmination of grand vision backed by adequate funding.

It’s not well publicized, but there are often plenty of practical spinoffs of big-project science which filter into everyday people’s lives. The enormous magnets that bend the particle beams in a circle at the LHC spawned new technology in high-speed rail in Europe. NASA’s space program has generated enormous amounts of technological innovation, from memory foam to novel materials. This is setting aside the enormous number of support staff that are hired to run and maintain facilities like CERN, and the obvious societal benefits of giving hordes of physicists something to tickle their brains with and keep them out of trouble and off the streets, both of which keep people gainfully employed and contributing to the economy. To say there is no practical reason to fund grand vision science is to be ignorant of what exactly grand vision science entails.

We haven’t had many collective “wow…!” moments in science lately, and there’s been a steady stream of funding cuts, regressive science policy, and wilful obfuscation of information by government agencies at the behest of the minister, and that’s just in Canada. There is Canadian involvement in the results — some researchers at TRIUMF are involved in the ATLAS collaboration — but even if there wasn’t, we could use a “wow…!” moment or five. Science in Canada is being ground away, and we need moments like this to inspire us to keep pushing the boundaries of knowledge.

The Flame Challenge Revisited

Since the winner of the Flame Challenge has been announced (congratulations to Ben Ames for his animated video!), I thought I’d, for the sake of some potential feedback, publish my humble entry and comment a bit on the challenge itself. For those of you who’re not sure what I’m talking about, the Flame Challenge was an initiative spearheaded by Alan Alda which sought an answer to a seemingly simple question: what is a flame?

A flaming matchstick.

What is this?

When Alda was 11, he’d asked this of his science teacher, and the teacher replied simply “it’s oxidation,” which is a thoroughly unsatisfying answer. In an effort to get people thinking about how to make science accessible to young people, Alda challenged the science community (and anyone else who was interested) to explain what a flame was in a way that an 11-year-old could understand it. Classes of 11-year-olds all over the US judged the entries, and the challenge will now become an annual initiative (though with a different question each year).

Here are a few short paragraphs that I submitted.

Continue reading

No Faster-Than-Light Neutrinos Yet

At the end of September, a paper was published with the provocative conclusion that neutrinos had been measured to travel faster than the speed of light. It was big news, and was widely reported in the media as a reasonably established finding (when in reality it had been released on arXiv, an open-access physics portal, but was not yet peer reviewed, and was subject to much eyebrow raising from physicists at large). In the past week, news has come out that there were some systematic sources of error, and the results are not necessarily accurate.

Here’s a quick recap of what the original experiment entailed:

  • a beam of neutrinos was generated at CERN, and the beam was aimed through the Alps (ie, underground) to a detector in Italy called OPERA, about 730 km away
  • the departure of the beam from CERN is timestamped using a highly accurate (and very carefully calibrated) GPS system
  • the distance between the two sites has been surveyed very precisely (also using GPS), so the time a light signal would need to cover the same distance can be calculated; this is the benchmark against which the neutrinos’ time-of-flight is compared
  • when the neutrinos are detected in Italy, their arrival is timestamped using the same GPS system, and the measured time-of-flight can be compared to the calculated light travel time
  • this is repeated over and over again to reduce statistical errors in the measurements
Schematic diagram for the experiment.

The scientists reported that the neutrinos appeared to arrive about 60 ns earlier than light would have, ie, a difference in times-of-flight of about 60 ns. Now, the scientists involved have found two sources of error, which, incidentally, mirror my hunch from when this first hit the news. First, some of the GPS timing equipment was operating far outside of its normal operating range, and so may not behave exactly as expected; this error is thought to produce a faster apparent neutrino speed. But the second source of error is a loose connection in the fibre optic cable that carries the GPS timing signal down to the detector, introducing a delay that may account for the measured 60 ns difference.
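
To put that 60 ns in context, here's the back-of-the-envelope arithmetic using only the numbers quoted above:

```python
# Scale of the claimed effect.
c = 2.998e8          # speed of light, m/s
baseline = 730e3     # CERN to the OPERA detector, m (approximate)
early = 60e-9        # reported early arrival, s

light_time = baseline / c
print(f"light travel time over the baseline: {light_time * 1e3:.2f} ms")  # ~2.43 ms
print(f"fractional speed difference implied: {early / light_time:.1e}")   # ~2.5e-05
```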

I’m not at all surprised at this — when the paper was first put out, I thought it was only a matter of time before a systematic source of error was found. General relativity has held up spectacularly in every experimental test undertaken, and it would take a lot to upend all of that.

I was surprised that the group released their paper as early as they did, and without initial peer review, and I think that speaks to the authority that the scientific community confers on the CERN collaboration. If this exact paper had been written by a group at a small, less renowned institution (assuming they had all the equipment to do the experiment), would it have ever seen the light of day? Would a smaller group release very controversial results which naturally invite a huge amount of attention from popular media, without even subjecting them to peer review first? Would they ask for scrutiny from all and sundry, not just their peers?

Continue reading

Electric Fields

In the last post, I glossed over the bit about how lightning has a hard time passing through air, so I thought I’d clarify (and hopefully this’ll be clear enough that I don’t need to keep up with this string of addenda and clarifications and can write about something new).

From the last post:

The net difference in electrical potential builds up, until the neutral air and water vapour in between the positive and negative regions can no longer sustain the difference, and a lightning bolt discharges the electrical energy. Air is a very good electrical insulator (ie, it is difficult for an electrical current to pass through the air), so a very large electric field can be sustained in the cloud before a lightning bolt discharges the stored energy, and returns at least part of the cloud to a neutral electrical state.

So what exactly is an electric field? It’s a region where, if a charged particle is placed, it will experience an electric force. It’s just like a magnetic field: when a magnet (for example, a compass) is placed in a magnetic field (like the Earth’s planetary magnetic field), it experiences a force that aligns it (ie, the compass needle) in a particular way. Similarly, a charge dropped into an electric field will experience a force that pushes it along the field. Electric fields are created by a distribution of charges, either discrete or continuous:

A point charge and a lump of continuous charge, both with electric field lines.

The green lines represent the electrical field.

Of course, the force experienced by a charge dropped into a field depends on the sign (positive or negative) of the charge. A negative charge will experience the opposite force to the one a positive charge experiences, ie, the arrowheads all point in the other direction.
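
For anyone who wants the formulas behind the pictures (these are just the standard textbook relations, not anything new from the original post):

```latex
% Force on a charge q sitting in an electric field \vec{E}:
\vec{F} = q\,\vec{E}
% Field of a single point charge Q at a distance r (Coulomb's law):
\vec{E} = \frac{1}{4\pi\varepsilon_0}\,\frac{Q}{r^{2}}\,\hat{r}
% Flipping the sign of q flips the direction of the force, which is why the
% arrowheads effectively point the other way for a negative charge.
```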

With lightning, it’s not a point test charge dropped into the cloud that creates the bolt, but rather that the charge distribution itself cannot be sustained any longer, and a bolt transfers charge from one region of the cloud to another and neutralizes the field.

Heavily charged cloud with two lightning bolts.

The bolt travels through air, and air is not a vacuum, so the physical properties of the air (or any material that charge is attempting to move through) will affect how easily the charge can move through the material. Materials (and by materials I mean any state of matter, so it can include, say, glass, water, and air) can generally be classified as either insulators or conductors, depending on a property called conductivity. Electrical energy has a hard time travelling through insulators (like glass), which have a low conductivity, while it passes easily through conductors (like metals), which have high conductivity.
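
The usual way to make “conductivity” precise is the microscopic form of Ohm's law (again, a standard relation added here for illustration):

```latex
% Current density driven by an electric field in a material of conductivity \sigma:
\vec{J} = \sigma\,\vec{E}
% Insulators: tiny \sigma, so even a strong field drives almost no current,
% until the field exceeds the material's breakdown strength and a spark forms.
% Conductors: large \sigma, so charge flows readily even in a weak field.
```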

Continue reading