How I live-tweeted my own conference talk

Recently, I wrote about my strategy for preparing scientific talks and promised to follow up with an example I was preparing at the time. The whole thing took longer to prepare than I had planned, even though the talk itself went smoothly (including the live tweeting!).
Let me start by telling you a story:
Superconducting systems are among the best candidates for quantum computers. Light is ideal for quantum communication owing to low losses and noise. Mechanical oscillators can mediate coupling between microwaves and light. Better performance can be achieved by optimizing for a specific task. We can also use new designs to improve efficiency and bandwidth. Highly efficient transduction is possible using adiabatic state transfer. Transducer bandwidth can be increased by increasing the array size. Reflection from a large number of cavities introduces a phase shift. High conversion efficiency is possible in presence of losses and noise. We must also limit backscattering. Transducer array is an interesting platform for frequency conversion. Generalizations of the system are possible.
Granted, it’s not a compelling read. The sentences form a logical sequence, but I simply set them down one after the other. There are no links between them, no therefores and becauses. Just one fact after another.
But this story is not supposed to be perfect—it’s the core of my talk. These are the topic sentences I used, one per slide (except for one slide, which had two). This is the story that someone who doesn’t really pay attention will see; those who listen will hear the proper version, with all the links between the ideas.
This story goes roughly like this:
Good afternoon, my name is Ondrej Cernotik and I work at the Leibniz University in Hannover. I am going to talk about “Novel approaches to optomechanical transduction”. The main motivation for this research is the following: superconducting circuits—operating at microwave frequencies—are one of the best platforms for quantum computing. Quantum communication, on the other hand, is best done with light. So, if we want to build quantum networks of superconducting quantum computers, we need a link between microwaves and light.
Such an interface can be provided by a mechanical oscillator that interacts with light via radiation pressure and with a microwave circuit by electrostatic forces. The system can look like this: …
What I say is assisted by the slides that I show. I start with a title slide that shows all the relevant information—title, my name, and affiliation. I continue with a slide showing a picture of a superconducting circuit for quantum computing, then an experiment with quantum communication, followed by a scheme for an optomechanical transducer. Later, I show a basic mathematical description of the systems I talk about (the most important rule here is to keep the maths simple!) and simple plots that show how the systems behave.
[Slides from the talk]
I can cover the middle ground—not just the basic facts and not the nitty-gritty details—on Twitter. The topic sentences I use are short so it’s no problem to fit more information in a tweet. I can upload figures to make the tweets more appealing or provide additional information. After all, my audience on Twitter is different from the audience at the conference and might need more background.
[Tweets accompanying the talk]
And how did I tweet while presenting at a conference? I didn’t actually tweet while talking, of course. There are many tools that let you schedule tweets (I used Buffer) so that they get posted at a specified time in the future. That’s where good planning of my talk came in (apart from sticking to the allocated time window, obviously). If I know how much time I’ll spend on each part, I know when each tweet has to go out.
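To make the timing concrete, here is a minimal sketch of how such a schedule can be worked out (the per-slide durations are invented for the example; only the 15:15 start time comes from the conference programme):

    from datetime import datetime, timedelta

    # Hypothetical rehearsal timings: minutes I plan to spend on each slide.
    slide_minutes = [1.0, 1.5, 2.0, 2.0, 1.5, 2.0]

    # Start of the talk: 15:15 local time, with the date filled in for the example.
    talk_start = datetime(2017, 3, 10, 15, 15)

    # The tweet for slide n goes out once the previous slides are over.
    elapsed = 0.0
    for slide, minutes in enumerate(slide_minutes, start=1):
        tweet_time = talk_start + timedelta(minutes=elapsed)
        print(f"Slide {slide}: schedule the tweet for {tweet_time:%H:%M}")
        elapsed += minutes

Buffer (or any similar tool) then only needs these computed times; the rest is sticking to the rehearsed tempo.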
Originally, I decided to live tweet my talk only to illustrate how identifying the key messages of a talk can work (and to prove to myself that I can do it). But now I think it wasn’t a bad idea and I might try it again in the future. I won’t tweet every single slide but only the most critical information. Making a talk Twitter-friendly is not merely an interesting exercise and a way to make sure that the main message is clear; it’s also a great opportunity to share my work with those who cannot attend or who might enjoy a less technical version of my presentation.
If you’re going to give a talk soon, do give live-tweeting a try! And let me know how it went!
If you want to know more about the details, I storified my tweets and uploaded my presentation to SlideShare. Feel free to check them out!

Scientific presentations and the art of storytelling

How many talks have you attended in the last year? And how many of those did you enjoy? Even when the topic itself is interesting, one often leaves disappointed. Some speakers spend too much time on technical details and do not have time to discuss the main results; others are not well prepared and keep jumping backwards to remind us of something they mentioned ten minutes ago. It’s almost as if they didn’t consider sharing their work important enough to warrant careful preparation.
No matter the situation, I always have to prepare; it’s not entirely voluntary. Because of my stutter, I have to make sure I know what I’ll say—less stress means my speech flows better. These preparations cost me a lot of time that many would spend working on something else. But thorough preparation often makes up for my handicap, and my presentation can end up above average at a conference.
In the course of my PhD, I developed an effective workflow for preparing academic talks. It takes time (a lot of time, sometimes). It’s not the only method to prepare. And possibly not even the best one. But it works for me. And the main idea is simple enough that anyone else can adapt my approach in no time.
It all boils down to a single piece of advice: You are telling a story, and your job is to make sure it’s a good one. That’s all. If you know what story you want to tell and how you want to tell it, you’re prepared.

The what

Figuring out your story is the most critical part. There are many ways to frame your research in the context of existing work. Maybe you started studying the social behaviour of dolphins because you love the animals. Or you think that we can learn from them and improve our own relationships. Or you want to understand the variations between dolphin species.
You will also have to choose the right story for your audience. A group of academics might not care about your love of dolphins, but it can be a good way to connect with a class of middle schoolers. Different academic audiences will want to hear different talks as well.
Finally, your story will depend on the time allocated for your talk. If you have to give a short talk, you can’t delve into all the fascinating details of your work; you have to pick the most interesting and important results.

The how

Once you’ve figured out the story you want to tell, you can think about how to tell it. This is, of course, entirely up to you, but there’s one rule you should follow if you’re preparing a PowerPoint/Keynote presentation: each slide should be one step forward. Not more, not less. If you jump forward too fast, you’ll lose your audience; if you’re too slow, you’ll bore them.
To ensure that I stick to this rule, I design each slide around a short sentence. This way, it is easy for the audience to understand what the main message is. The rest of the slide supports this statement—by illustrations, equations, or graphs.
Once my presentation is ready, I run a simple check that I’ve created a good story: I copy all the sentences into an empty document and make sure that they build on each other. Each sentence should be a logical step in a sequence leading to the grand finale. If that is not the case, I can see where the problem is and rewrite that spot accordingly. In the end, each slide is a concise, independent idea, and together they form my story arc. Each slide is so simple that I could tweet it if I wanted to.
The rest is practice, practice, practice. I make sure I know how to transition from one slide to the next without awkward pauses. This also gives me an idea about how much time I need. I identify places where I can speed up if need be and create a few checkpoints throughout the presentation—I write down how much time I need to get to particular slides. When presenting, I can see whether I’m following my schedule and adjust my tempo as I present.

Example

I will show you my approach using an actual presentation I will be giving in a few days. This week, the annual spring meeting of the German Physical Society is taking place in Mainz. On the last day of the conference, Friday, March 10 at 15:15 CET (9:15 am EST), I am giving a presentation about my current research.
Since not everyone can attend my talk, I will tweet it live. For me, it will be an interesting experiment. For everyone else, it’s an opportunity to follow my presentation; you will also see how I can fit each slide into a tweet. Later (probably next week) I will write a second blog post in this miniseries looking into this talk in more detail and comparing it with the live tweeted version.

Changes

Building a new habit is hard. I saw that with blogging twice already. I started and stopped and started and stopped. Now, I am starting a third time and hoping that my blogging routine will stick.

Insanity is doing the same thing over and over and expecting different results. Does that mean I’m insane? Not at all. I am not doing the same thing over and over, and not only because I have learned from past mistakes. I’m not starting the same blog this time.

My blogging will take a new turn. Though writing about physics is fine (and I plan to continue that), I want to start writing about other matters as well. Because academic life is not just the research. And even if it were, my life isn’t just academia. And various things affect my scholarly experience.

All these issues are important. Some people don’t fully understand what the academic life is like; and I want to show the range of scholarly activities to non-academics. Some things we do wrong in academia; these need to be identified, discussed, and addressed. Some aspects of the everyday non-academic life affect our academic experience, or vice versa; and we need to talk about those issues. Research has its emotional side that we don’t talk about often enough; it’s time we changed that.

I don’t know how my blog will change. I might not write as regularly as I used to (which is still an improvement over not writing at all). I might write more, or less. I might experiment with form or content. I might lose readers or gain new ones.

I don’t know how my blog will change. And I can’t wait to find out.

Benefits and challenges of tweeting a conference

Academic conferences are usually exhausting. You spend the whole day (or, more often, several days) shut in a lecture room, often without direct sunlight or fresh air, and try to absorb as much information as you can from the (sometimes poorly prepared) talks of your fellow researchers. At some events, speakers change as often as every 15 minutes, which makes it even harder to keep track of their talks. At larger conferences, the stress of running between parallel sessions to catch all the interesting talks adds to the mix. Nobody in their right mind would voluntarily add one more task on top of that — tweeting what others are talking about, right?

It might seem that live tweeting at a conference only adds more stress and work to one’s already packed program. Yet, it helps me pay more attention to what the speaker is saying and to identify the main message of the talk. As a result, I can learn better and enjoy the conference more.

This shouldn’t come as a surprise to anyone with solid tweeting experience. After all, Twitter forces you to express your thoughts as concisely as possible. For tweeting from a conference, that does not mean you should dumb down what you hear; instead, you have to pay close attention to what the key information is. You have to strip the information of all unimportant details (which might be crucial for the science but not necessarily for your audience). And that is also what you need in order to take good notes for yourself and remember what you heard.

Secondly, you have to adapt to your audience’s background. Most of your followers might not know why a particular research project is important or how it fits within the research that has already been done. You are thus forced to think about these questions as well. You might think you already know the answers, but you can easily find links between seemingly unrelated problems. And considering a known issue from a new angle (which you might do to help your audience understand its implications) can bring new and interesting insights.

Finally, it also helps me concentrate better to know that I am the only person who can share the conference with my followers. If I zone out for a minute, I will not know what the speaker said. As a result, I will also not be able to pass this knowledge on. Through this accountability, I tend to pay more attention than I would if I decided not to tweet.

While there are certain advantages to tweeting a conference, the practice is not so simple. Tweeting the talks you are attending is great but you have to remember that it also takes your attention away from the talk. The more you tweet, the less attention you will pay to the speaker. And if you start reading the tweets of others, you can miss the talk completely.

My solution to this problem is trying to find balance. Keeping the number of tweets within a reasonable limit, I do not overwhelm my followers and have time to focus on my learning experience as well. In my tweets, I explain what problem the speaker is trying to solve, why that is important, and how it can be done. If there is some interesting information on top of that, I’ll share it as well. If the talk is short (20 minutes or less), I might even tweet less. And not getting distracted by Twitter? That is a question of self-control, and nothing more.

Even if you manage to keep things brief, not get distracted by Twitter, and not tweet too much, you can spend a lot of time crafting and perfecting your tweets. At first, your tweet is five characters too long, then a piece of (maybe crucial) information is missing, now the tweet sounds a bit clumsy. Before you know it, the speaker has moved on and you missed the one slide that was necessary to understand the rest of the talk.

You have to keep in mind that live tweeting is different from your usual tweeting. Most of the time, you have plenty of time to create the perfect tweet, but not at a conference. Here, you have to get the tweet out as fast as you can (but not at the price of grammatical errors or incomprehensibility, naturally). Your followers will understand that your tweets can’t be as polished as they usually are.

No matter what you do, you should enjoy your conference, learn new things, and talk to interesting people. If you find that Twitter doesn’t help you achieve that, let it be and tweet less; or not at all. What works for me doesn’t necessarily have to work for you.

Is the Moon in the sky when you’re not looking?

If you find quantum physics hard to understand (or accept), rest assured that you are not alone. Even many physicists (including Albert Einstein, one of its founding fathers) refused to acknowledge that our world can behave so strangely. That atoms or electrons can be at two places at once or that it does not always make sense to talk about properties of particles before they are measured.

Physicists are only people, too. When a new theory challenges our worldview, we start to look for mistakes in said theory rather than in our assumptions about the universe. Many physical theories had to fight against ingrained beliefs — the heliocentric model of the Solar System, Einstein’s special and general relativity, or quantum mechanics are just a few examples. Eventually, the theory prevails, at least until it is replaced by a new, more precise one.

In quantum physics, the assumptions of locality and realism are challenged. The locality assumption — which comes from special relativity — tells us that information can travel through the universe only at the speed of light, and no faster. When we assume realism, we assume that the outcome of a measurement exists before the measurement is performed. In other words, if I look at the thermometer to find out how warm the weather is today, the temperature is not decided the moment I look at the thermometer; the air has this temperature regardless of whether I look.

Let’s illustrate that with an example. Suppose I have two coins with a very peculiar link between them. Every time I flip these coins, I get opposite outcomes: if the first coin shows heads, the other one shows tails, and vice versa. I never know which coin will give which outcome, but whenever I look at one coin, I immediately know what the outcome of the toss of the second coin will be.

So far so good. Now, I will take one of the coins and fly to the Moon while leaving you with the second coin here on Earth. If we now flip our coins at the same time, I immediately know what the outcome of your toss is, and you know the outcome of mine.

What does that have to do with local realism? Since we are now about 400 000 km apart, it takes any information about 1.3 seconds to travel between us. My coin thus cannot know what the outcome of your toss is; similarly, your coin knows nothing about my toss. That is what the locality assumption tells us.
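For completeness, the 1.3 seconds is just the distance divided by the speed of light:

    t = \frac{d}{c} \approx \frac{4\times 10^{8}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} \approx 1.3\,\mathrm{s}.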

How do the coins know which side to land on so that they always give opposite outcomes? That’s where realism comes in. In this situation, it tells us that the result of the measurement existed before the toss, and both coins therefore know what outcome the toss is supposed to give.

This is how anyone should expect two such coins to behave. But if the coins obey the laws of quantum mechanics, things are different. We cannot say that the outcome of the toss exists before we actually toss them. (This is actually a matter of interpretation of quantum physics — it is generally assumed that the measurement outcome is decided the moment the measurement is performed.) That’s why some physicists claimed quantum physics must be incomplete — there must be some underlying theory that explains what outcome every single coin flip will give. And such a theory must be local and realistic.1

What should we believe? Local realism or quantum physics? It turns out there is a simple test for that. Suppose that instead of a pair of coins we have two such pairs and I take one coin of each pair to the Moon and keep the other two coins here with you. If we now both flip coins from the same pair, we will always get opposite outcomes. But if we flip coins from different pairs, any combination of outcomes is possible.

While the experiment with a single pair of coins was largely a matter of interpretation, with two pairs of coins local realism and quantum physics predict different results. All we have to do is toss the coins many times, each of us deciding randomly which of our two coins to toss in each round and writing down which coin we tossed and what the outcome was. Then, we can compare our data and see which of the two theories is right.2

Although the test of local realism is, in principle, rather simple, it is not easy to build an experiment that can confidently decide whether local realism is true or not. There are two main challenges that need to be solved: The first problem is to make sure that the two systems are spatially separated. Here, it is important that the time difference between the measurements is so small that no communication between the two sites at the speed of light is possible. Since the distance over which quantum systems can be reliably transmitted is strongly limited, there are strict requirements on the synchronisation and speed of the experiment.

The second main problem is an efficient measurement. Most experimental tests of local realism are done with single photons, but those are extremely difficult to detect. The efficiency is so low that detectors often do not notice when a photon arrives. This opens a loophole — the measurement does not give us access to the whole statistics but only to part of it. And we cannot be sure that the statistics of the sub-ensemble are the same as those of the whole ensemble.

It took over 30 years to build an experiment (more precisely, three experiments: one with electron spins and two more with photons) that confidently refutes local realism and shows that quantum mechanics has to be taken seriously. One of our basic assumptions about this world thus has to be wrong: either some signals are able to travel faster than light, or it does not make sense to talk about objects we are not currently observing.


1 The experiment with two coins and the conclusion in this paragraph are a simplified version of the famous EPR paradox. It was formulated by Albert Einstein, Boris Podolsky, and Nathan Rosen in 1935 to show the problems of the Copenhagen interpretation of quantum mechanics.

2 Such an experiment — not with coins but with electron spins — was first proposed by John Stewart Bell in 1964. He showed that a particular correlation between the spins is bounded by the value of 2 for local realistic theories, whereas quantum mechanics allows stronger correlations, with values exceeding 2.
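For readers who want the formula: the correlation Bell considered is usually quoted in its CHSH form (a standard textbook statement, not something specific to the coin picture above). Writing E(a, b) for the correlation of the two outcomes when setting a is chosen on one side and setting b on the other,

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \leq 2 \quad \text{(local realism)},

whereas quantum mechanics allows values up to |S| = 2\sqrt{2} \approx 2.8.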


What I learned (and didn’t) from a year of blogging

It has been a year since I started blogging. It did not go quite as well as I hoped it would but also not as badly as I was afraid it might. I started full of determination with a clear plan, wrote posts… and then stopped. It took me seven months to start again and since then, I have been writing regularly.

This is a good time to stop, take a deep breath, and look back. Analyse (I am a theoretical physicist, remember?) what I did well and what could have been better. Who knows, other bloggers (whether just starting or more experienced) might find this useful.

  1. Regularity
    It is easier to keep momentum than gain it; it is easier to lose momentum than keep it. It takes no effort to decide to write the next post later. When I momentarily have too much work to do, it seems reasonable to skip writing the next blog post. But if that happens, it becomes more difficult to write it. If I make it my priority to publish a post every week, I will. It is not always easy but it can be done.
  2. Planning and serendipity
    While it is a good idea to have a plan, one should never be too strict about sticking to it. Reacting to current affairs (if they are related to the topic of the blog) is a good way to reach new audiences. And being open to other impulses can inspire upcoming posts.
  3. Learning
    Keeping a blog about science is a constant learning process. I do not write about things that are completely new and unknown to me, of course, but I do need to make sure that everything is factually correct. What’s more, I need to make sure that the topic is understandable to non-physicists. For that, I have to consider several ways of looking at a particular problem and pick the one (or ones) that are easiest to comprehend. And I can always learn something new from that!
  4. Time
    It takes a lot of time to write a blog post. Writing a thousand words can be done fairly quickly; finding those words is a different matter entirely. A completely new blog post starts with a topic and an outline. I can think about those while doing other things (such as commuting to and from work) but they still need time. Then comes the draft, editing and proofreading. After that, I might need to prepare pictures and only then is a new post ready to be published. Without proper planning, it is impossible to get the next post out on time.
  5. Failure
    Sometimes, blog posts don’t turn out the way I was hoping. Maybe I didn’t have enough time for writing or I chose a difficult topic to write about. That happens. I can’t expect every post (or any post) to be perfect; some are better, some are worse. If I don’t want to write bad blog posts, the best strategy is to not write at all — and that’s not an option. As long as I can figure out what I did wrong and learn from it, everything is good.

Those are things I learned so far. But there are also things I am still struggling with and need to improve:

  1. Organisation
    It happens to me sometimes that I outline a blog post in my head and, before I write the post, forget how I wanted to structure the argument. Then, I have to try to remember what I wanted to write or, in the worst case, start again from scratch. One way or the other, it costs me time. I need to learn to write these ideas down before they slip away. Or, even better, make outlining part of the process of writing a draft, experiment with the outline, and choose the one that works best.
  2. Finding time to write
    As I said above, writing a blog post takes time, which is sometimes hard to find. There is a way out of this problem (at least partially): using any narrow time windows during the day to write. I just have to remember, the next time I have a few minutes free, to take my notebook out (yes, I draft my blog posts by hand) and start writing.
  3. Writing ahead
    So far, I have started writing each post only after publishing the previous one. Does that sound reasonable? It isn’t, really. It means that I have exactly one week to write the next post. If I had several posts ready, I could occasionally take a little longer to write the next one — or even take a break for a week. Having a buffer is something I can start building right away; all I need to do is be a little more strict about writing for the next few weeks and I will surely manage more than a post per week.

These are my experiences with blogging so far. If you also blog, what do you (or did you) struggle with? What helped you solve your problems?

The end is nigh. Well, not really

It is beginning. Earlier this week, I downloaded Scrivener and yesterday, I started outlining my dissertation. I still have a lot of time to finish — I am currently planning to submit early next year and defend in spring, though that might change — but I think it is a good idea to start now. Why, you ask?

[Screenshot: the dreaded blank page.]

First of all, this is the longest, most complex piece of writing I have ever set out to write. Sure, I had to write a bachelor’s and a master’s thesis, but those two combined are probably shorter than my dissertation will be. It will therefore take more time to write. And it is better to start early and have plenty of time for edits than to be chased by deadlines.

But more importantly — and perhaps paradoxically — I am starting to write now because I am not done researching yet. Just recently, I finished a project and am about to start a new one. What will I do? I DON’T KNOW. And that is exactly my point. This is the right moment for me to stop working on my own projects and publications and look systematically and in detail at the work of others. Then, I can better judge which open questions I can tackle. And it is only natural to write what I learn and turn it into the introductory parts of my dissertation.

Connected to both these reasons to start so early is a third one: because not all my work is done and because the writing will be so complex, I need not only to write what I intend to but, first of all, to figure out what it is that I want to write. And for that, I need to keep track of all the thoughts and ideas that come to me and organise all the material I plan to use. This is a task that goes beyond what the standard LaTeX editors I have used to date can do. Therefore, I need to find a platform that can take care of that and become sufficiently familiar with it.

So far, Scrivener seems to be a good choice. Not only can I use it to work on my draft, but it also helps me keep any notes and further materials in the same place as the dissertation draft. I have to look into it in more detail to find out how best to use all these features, and that will take quite some time. But if all goes as smoothly as it seems it will, the writing itself will then be relatively easy.

Since I started so recently, I have not managed more than a brief outline of the first half of my dissertation. And yet, it has already helped me realise how much I still do not know about the basics that will form the foundations of my dissertation. If that happened with a deadline looming over me, it would mean a serious complication to my plans. Now, I can simply go and read up on the things I still need to learn.

Quite naturally, this approach also has its disadvantages. How am I supposed to write the introductory chapters presenting the knowledge I am building upon if I still do not know what I will do during the last year of my PhD? My choice of the next project is simply constrained by that. This situation is not that much different from what I would experience anyway — my next project should, in some way, be related to my previous one. I might then need to rearrange the introductory material a little but it should not need any complicated redrafting.

In the end, this approach will probably save me a lot of binge writing. By the time my last research project is done, I will have, ideally, written most of my dissertation already; it will only remain to write about my last project and make sure the whole text is coherent. Finishing my dissertation will then be just a matter of a few weeks and not several months.

And now, if you’ll excuse me, I have some writing to do…

Good scientists publish, shitty ones blog. Or do they?

As scientists, we are in a very privileged position compared to the rest of the population. Not only do we really enjoy what we do, but we also get to choose what to work on. Sure, there is the dark world of academic bureaucracy and the perpetual fight for grant money, but I still think that we are an extremely lucky bunch. I am not aware of any other profession where the situation is similar.

Now and then, we forget how truly exceptional our situation is and take this privilege too much for granted. Then, we try to forget about the outside world and, hidden in our ivory towers, fight against every change in the academic environment. Sometimes, we feel offended by accusations of sexism. Other times, we find it outrageous that we should bring science to social media and to the public.

I am sure there are bad scientists who vent their frustrations by criticising the work of others on the internet. But there is also a large group of researchers who do not forget about the world outside the academic milieu and want to share the amazing science they do with others — be it fellow researchers who do not work in exactly the same field of study, family and friends who never stop asking about one’s work, or anyone willing to listen. We then start our blogs, where we talk about our own research, the work done by our fellow scientists, our approaches to tackling the problems we face at work, and the joy our daily lives bring.

Maybe we will not publish as many papers as those who do not see beyond their ivory towers because we are not so focused on our research output. But there are many ways in which scientists can contribute to the community; publishing one’s own results, reviewing the work of others, mentoring and teaching younger generations, or sharing our passion for science with the rest of the world are just a few. It is, of course, impossible to judge who is the best scientist but, as long as we all contribute in a positive way, that does not (or should not) play a role. At the end of the day, science is not a solitary endeavour but a benefit to society.

We also must not forget that it is the public who lets us work on problems we find fascinating. The least we can do in return is tell them what we did and how it will benefit them; otherwise, we might wake up one day and find them not willing to finance our work any more. Sure, it is not always immediately clear why our results are so important or how they can be applied to benefit mankind but hiding our work from the lay public is not a solution. Even such abstract fields as theoretical mathematics can be made accessible to those willing to learn something new.

A long-term commitment to disseminating research to the public is not an easy one. But without it, it is difficult to get society to trust science, and we cannot expect the public to listen to us when we present important findings. For instance, if we want to convince the public that climate change is a real threat to our civilisation, we have to explain how we found that out and what the findings mean for our near future. If we ask the public to trust us blindly, all we can expect is skepticism and denial.

I am not implying that every scientist has to blog. As I said above, there are many ways in which researchers can contribute to the community, and blogging is just one of them. If someone finds it difficult or thinks they can contribute better in other ways, that is perfectly fine. But damning every science blogger and claiming they are all failures is a very short-sighted approach.

How well can we measure position?

It is a well-known fact in quantum physics that the position and momentum of an object (e.g., a single atom or a vibrating mirror) cannot both be known with arbitrary precision. The more we know about the position of a mirror, the less we know about how fast it is moving, and vice versa. This fact — sometimes misattributed to the Heisenberg uncertainty principle1 — has far-reaching consequences for the field of optomechanics, including the efforts to detect gravitational waves.

Simple position measurement. A beam of light is reflected off of a mirror and the phase the light acquires during its transmission can be used to determine the precise position of the mirror.

Imagine one of the most basic tasks in optomechanics: using light to find the position of a mechanical oscillator. The simplest scenario assumes that you just bounce light off of the oscillator; its position determines the phase shift the light gets. Measuring this shift, you can infer the position of the mirror.

If the measurement is very precise, the momentum of the oscillator will become very blurry. This means that although we know where the mirror is now, we cannot predict how it will move in the future because we do not know how fast it is moving. If we attempt a second position measurement after the first one, its result will be very imprecise.2 The uncertainty relation connecting position and momentum thus gets translated into an uncertainty relation between positions at two different times.
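A rough sketch of why this happens (treating the mirror as a free mass m and assuming the first measurement leaves no correlations between position and momentum):

    x(t) = x(0) + \frac{p(0)}{m}\,t \quad\Rightarrow\quad \Delta x(t) \gtrsim \frac{\Delta p(0)\,t}{m} \geq \frac{\hbar\,t}{2\,m\,\Delta x(0)},

so the more sharply the first measurement pins down x(0), the more the result of the second one spreads.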

There is a simple explanation for this behaviour: the change in the statistics (i.e., in the variances of position and momentum) is due to the interaction with light. A precise measurement of the light’s phase results in precise knowledge of the mirror’s position. Correspondingly, there must be something in the light pulse that disturbs the momentum.

Indeed, there is a part of the interaction that is responsible for this reduced knowledge of the momentum. The light that interacts with the mirror kicks it — just as a person jumping on a trampoline pushes the trampoline downwards. That in itself would be perfectly fine if we knew how strongly the mirror gets kicked, but there is no way for us to know. There is another uncertainty relation at play (this time a true Heisenberg uncertainty relation), namely between the amplitude (or intensity) and the phase of the light beam.

At first, it might seem that this backaction of the measurement on the mirror can be overcome by using a light pulse with a specific number of photons. If we know exactly how many photons kicked the mirror, we can (at least in principle) determine how strong the kick was. The mirror thus gets a well-specified kick and the momentum uncertainty does not grow. The problem with this approach is that such states of light have a completely random phase, so we do not learn anything about the mirror’s position from the measurement. If, however, we use a state with a precisely specified phase, its amplitude is completely random and we cannot know anything about the size of the momentum kick.

In any practical setting, scientists do not have such precise control over the state they use to probe the mirror. In most cases, they will use a coherent state — the state of light you get out of a laser, characterised by equal uncertainty in amplitude and phase. The overall amplitude of the pulse is the only thing that can be controlled. Using a very weak pulse does not give a very good measurement because the signal from the mirror is weak as well. And while the precision improves with growing power, the backaction grows too because the uncertainty in the amplitude increases. When the intensity is neither too large nor too small, the joint error of successive measurements is minimised; the measurement then reaches the standard quantum limit.
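In symbols, and up to factors of order one, the trade-off for two position measurements of a free mass m separated by a time \tau reads (with \Delta x_{\mathrm{imp}} the imprecision of the readout, \Delta p_{\mathrm{ba}} the backaction kick, and \Delta x_{\mathrm{imp}}\,\Delta p_{\mathrm{ba}} \geq \hbar/2):

    \Delta x_{\mathrm{total}}^{2} = \Delta x_{\mathrm{imp}}^{2} + \left(\frac{\Delta p_{\mathrm{ba}}\,\tau}{m}\right)^{2} \geq \frac{2\,\Delta x_{\mathrm{imp}}\,\Delta p_{\mathrm{ba}}\,\tau}{m} \geq \frac{\hbar\,\tau}{m},

with the minimum, the standard quantum limit \Delta x_{\mathrm{SQL}} = \sqrt{\hbar\tau/m}, reached when the imprecision and backaction contributions are equal.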

Schematic depiction of a speed meter. The double reflection off the mirror (with a little time delay between them) can be used to determine the speed with which the mirror moves; moreover, the second reflection counteracts the momentum kick due to the first one.

Is this the best measurement of mechanical motion we can do? As the name suggests, there is another, non-standard limit on quantum measurements. The problem with the current setting is that we are trying to measure two incompatible observables (the mirror’s positions at two different times). If we measure the velocity of the mirror instead, this problem does not arise. Velocity is the momentum divided by the mass of the mirror; knowing it tells us how the mirror will move in the future. This is in stark contrast to a position measurement, where better knowledge leads to a less precise prediction of the future movements of the mirror.

Another option is to disregard the fast, periodic oscillations of the mirror. Since we know that the mirror is oscillating at a particular frequency, we can work in a reference frame where the oscillations do not play a role. The mirror is then almost motionless while the rest of the universe is now oscillating around us. The slowly changing position of the mirror in this frame can be measured precisely since it is not affected by the momentum uncertainty. The momentum in this rotating frame, of course, becomes more blurry as the precision is increased but it does not influence future positions of the mirror.3
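One standard way to make this precise (a sketch, not spelled out in the post): in the frame rotating at the mechanical frequency \Omega, one works with the slowly varying quadratures

    X = x\cos\Omega t - \frac{p}{m\Omega}\sin\Omega t, \qquad Y = x\sin\Omega t + \frac{p}{m\Omega}\cos\Omega t.

For an ideal oscillator both are constants of motion, and X commutes with itself at all times, [X(t), X(t')] = 0, so measuring X arbitrarily well only disturbs Y, which never feeds back into X.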

Both these approaches to measuring mechanical motion are more complicated than a simple position measurement. But since various tasks require very precise measurements — often more precise than the standard quantum limit allows — there is a lot of scientific activity around these alternative strategies. Who knows, they might even find their way into real-world applications in a few years’ time.


1 Strictly speaking, the Heisenberg uncertainty relation concerns the properties of a quantum state that we prepare. Here, on the other hand, we are asking how our position measurement affects the momentum, which is not the same thing. As we will see, however, there is a close connection between the two.

2 This is true in the statistical sense — we assume that we do many such pairs of measurements and look at the variance of the second one.

3 If you find it strange that momentum does not affect future position, it is because we are not talking about position and momentum in the standard sense. Their usual relationship therefore does not hold.


Seeing ripples in spacetime

One hundred years after Albert Einstein shared it with the world, general relativity is waiting for its last confirmation: the direct observation of gravitational waves. These ripples in the curvature of spacetime are created when a massive object accelerates. Typical examples of such systems are binary neutron stars or black holes; as the two stars (or black holes) orbit each other, they gradually lose energy, which gets emitted in the form of gravitational waves, until they eventually collide.1

How large are these waves? That depends on how far away their sources are, but scientists generally expect the relative size of the waves we can hope to see to be about 10^-20. This means that, due to a passing gravitational wave, a one-metre-long rod will expand and shrink by 0.000 000 000 000 000 000 01 metre, which is a hundred thousand times smaller than a proton. In other words, if a proton were the size of a football field, a gravitational wave would be as small as a grain of sand. Said proton is, at the same time, just a grain of sand compared to a football-field-sized atom; and if atoms were as large as grains of sand, a single human hair would be one kilometre in diameter.

Scheme of a Michelson interferometer. Light from a laser (left) is split on a beam splitter (middle) and travels through two arms of the interferometer (top and right); after reflection, the light recombines on the beam splitter. If both arms are equally long (top scheme), all light is reflected back to the laser. If the lengths of the two arms differ (bottom scheme), part of the light will escape through the bottom port where it can be detected.

To detect something that small, scientists build large interferometers. If both arms of such an interferometer are exactly the same length, all light that we send in will come out through the same port it came in. But if the length of the arms differs slightly (for example, because of a gravitational wave), part of the light will leak through the other port where it can be detected.
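A minimal, idealised version of how the leaked light depends on the arm length difference \Delta L (lossless mirrors, a perfect 50/50 beam splitter, laser wavelength \lambda) is

    P_{\mathrm{out}} = P_{\mathrm{in}}\,\sin^{2}\!\left(\frac{2\pi\,\Delta L}{\lambda}\right),

so for \Delta L = 0 the output port stays dark, and a tiny \Delta L lets through a correspondingly tiny amount of light.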

The interferometers have to be large because it is then easier to detect small changes in the arm length. With metre-long arms, the change in length that has to be detected is 10^-20 m. The interferometers of the LIGO detector are each four kilometres long, so their length changes by about 10^-17 m. For the eLISA interferometer (which will be sent to space to detect gravitational waves from there), arms one million kilometres long are planned; their length will change by 10^-11 m, which is just ten times smaller than the size of an atom.
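The numbers above follow from multiplying the relative size of the wave (the strain h) by the arm length L:

    \Delta L = h\,L: \qquad 10^{-20}\times 4\,\mathrm{km} = 4\times 10^{-17}\,\mathrm{m}, \qquad 10^{-20}\times 10^{6}\,\mathrm{km} = 10^{-11}\,\mathrm{m}.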

The precision needed in LIGO still cannot be achieved with a simple interferometer. The solution is to place a set of mirrors into the arms so that the light bounces back and forth many times. If the light travels a million times between the mirrors, it is as if the arms were 4 million km long, and the required measurement precision becomes similar to eLISA’s. Further improvement can be achieved by adding another mirror at the input of the interferometer. Light leaving through this port then returns into the interferometer and the circulating intensity grows. And the more light circulates through the interferometer, the more will leak through the output port where it can be detected. A final improvement is achieved by placing yet another mirror into the output port that we are trying to measure.

The effective length of the arms can be extended by letting the light travel many times through them (top). The sensitivity can be further improved by adding mirrors into the input and output ports (bottom).

Unfortunately, gravitational waves are not the only thing that can cause such small shifts in arm length. Any sort of vibrations can distort the measurement. Therefore, gravitational-wave detectors are built in remote locations where there is little or no human activity. Still, the occasional lorry driving by or even a person stomping near one of the mirrors will disturb the measurement. Other noise comes from various technical imperfections in the interferometer, such as fluctuations in laser frequency and intensity, presence of residual gas in the arms (which are supposed to be in vacuum), or heating of the mirrors due to light absorption. Finally, there is the quantum noise which ultimately limits the precision when all other imperfections are eliminated.

The hunt for gravitational waves will not be over once they are detected and Einstein’s theory confirmed. Once we are able to detect them with sufficient precision in a large frequency window (ranging from fractions of hertz to tens of kilohertz), we can use them to learn more about the universe. They can, for instance, tell us more about black holes than electromagnetic radiation can. Cosmic inflation is another source of gravitational waves which could tell us more about the universe shortly after the Big Bang. With a successful detection of gravitational waves, we will open a new window into the universe.


1 The loss of energy by such objects, in perfect agreement with Einstein’s predictions, has already been observed and was awarded the 1993 Nobel Prize in Physics. Scientists are therefore confident that such waves do exist.