Connecting superconducting quantum computers with light

Entanglement is a peculiar feature of quantum systems that makes them behave as if they were sitting directly next to each other even if they are kilometres apart. Such behaviour does not occur in classical physics. Classical particles can affect each other through fields — such as the gravitational or electromagnetic field — but these fields propagate at the speed of light; the connection between entangled particles, however, appears to be instantaneous. It is simply as if the distance between them did not exist.

Classical particles do not affect each other when they are far apart. In quantum physics, entangled particles do influence one another; it is as if they were always close to each other.

Entanglement started to interest scientists in the early days of quantum physics (the term was coined by Erwin Schrödinger in 1935), but it was only in the 1990s that people realised entanglement can also be used as a resource for quantum communication. After some 20 years of intensive research, little has been achieved in creating entanglement in the laboratory over the long distances needed for quantum communication. The main problem is that the only system we can send over such distances is light; if a system does not interact with light, it cannot be entangled with another system sitting far away.

One particular example of such a system is a superconducting circuit. These circuits typically work at energies corresponding to microwave fields, and microwaves cannot be transmitted as easily as light (at least in the quantum regime; the world is too hot for them and the signal they carry does not survive). But superconducting systems seem to be very well suited for quantum computing. And having a (quantum) computer which cannot communicate with other computers over the (quantum) internet… well, what’s the point?

A superconducting system with a bus resonator (black line) and two quantum bits (in red squares). © Schoelkopf lab, Yale University.

Naturally, scientists started looking for a way to connect superconducting circuits and light using a third system that can interact with both. There are several candidates for such an interface — sort of a quantum network card, if you like — and one of the most promising options is to use a mechanical oscillator for the task. Mechanical oscillators can be manufactured relatively easily, controlled well, and they can interact strongly with both light and superconducting circuits.

A scheme for entangling superconducting qubits that uses their interaction with mechanical oscillators and optomechanical measurement.

What can we do with all that? Let us start with the simplest possible task — entangling two quantum bits formed by superconducting circuits1 and connected by light. There are many ways this can be done; we will use an approach where the light is used to measure the two qubits. A well-chosen measurement which reveals some joint property of the qubits can result in an entangled state; furthermore, it has the advantage that the right measurement outcome signals that the entangled state has been successfully created. (This is also a reminder of the importance measurements have in quantum physics that I wrote about before.)

Suppose we start by preparing the qubits in a state in which each qubit has the values 0 and 1 at the same time. We then let each qubit interact with a mechanical oscillator and the oscillators interact with a beam of light that we measure. If we build the system the right way, the measurement of the light will tell us how many qubits have the value 1. It can happen that both or neither have this value, which is uninteresting. But if exactly one of the qubits has the value 1 (the other, naturally, has the value 0), they are entangled because we cannot tell which qubit has which value. No matter how far apart they are, if we now measure one of the qubits to be in the state 0, the other will immediately end up having the value 1 and vice versa.
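In the usual bra-ket notation, the idea looks like this (a small sketch; in the actual scheme the measurement is performed on the light leaving the optomechanical systems): the two qubits start in the product state

$$
|\psi\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}} \otimes \frac{|0\rangle + |1\rangle}{\sqrt{2}} = \tfrac{1}{2}\left(|00\rangle + |01\rangle + |10\rangle + |11\rangle\right),
$$

and a measurement outcome telling us that exactly one excitation is present removes the terms $|00\rangle$ and $|11\rangle$, leaving the entangled state

$$
|\psi\rangle = \frac{|01\rangle + |10\rangle}{\sqrt{2}}.
$$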

There are many things one can do once the qubits are entangled — transfer quantum states using quantum teleportation, send encrypted messages using quantum key distribution, or try to confirm quantum mechanics by violating Bell’s inequality, for example.

Ultimately, people are interested in creating entanglement in more complicated systems; with superconducting circuits, it would be interesting to have many qubits entangled. There is one very practical reason for that: superconducting quantum computers need to work at very low temperatures (only about a hundredth of a degree above absolute zero) and it is very difficult to cool things down that far. As a result, only small things can be successfully cooled. Future superconducting quantum computers therefore cannot be very large; to have one large, powerful quantum computer, it is then necessary to connect several such computers using entanglement. Then, the many small computers will behave as if they all were in the same large fridge, forming parts of a single large quantum computer.

This post aims to summarise the main results of a paper I wrote with my PhD advisor on the topic of generating entanglement of superconducting qubits using optomechanical systems. A free preprint can be found at arXiv.



1 Superconducting qubits work similarly to classical bits in a computer — there is a current running through a circuit and the value of this current determines the value of the bit. The only fundamental difference is that a classical bit has a value of either 0 or 1, whereas a quantum bit can also be in a superposition of the two, having the values 0 and 1 simultaneously.

 

Through the looking glass

Studying physics ultimately changes the way one sees the world. This is probably true for any subject but with physics, this change goes deeper than with biology or history. One starts to see some very basic things very differently. At least that is what I think.

Take the simple act of measurement, for example. You want to know what the weather is like? You check the thermometer. Want to know whether you lost weight? You step on the scale. In any case, the act of measurement is just a way of obtaining some information that already exists. There was a particular temperature outside before you looked and it does not depend on whether you look or not.

Then you start learning quantum physics and your understanding of measurements changes dramatically. You want to measure the position of a particle? Sure, you can do that. But unlike in the everyday world, it does not make much sense to talk about the position before you measure it. The particle was not here or there before you measured; there was only a certain probability of finding it at each place.

Classical particles (left) are simply little balls but quantum particles (right) are just a cloud of probability.

At first, this might seem similar to your everyday experience — when you are looking for lost keys, you do not know where they are, so there is only a certain probability of finding them at a particular place. But there is an important difference; although you are unaware of the exact position of your keys, they are lying at a particular place. A quantum particle, however, is literally at several places at once. Only by measuring its position do you localise it at a particular place. It is as if looking at the thermometer changed the temperature outside.

When the position of a quantum particle is measured, its cloud of probability is squashed, representing the gain in information about its position we get.

Since the particle was not at a particular position before the measurement but it is at a specific position afterwards, the measurement changes the behaviour of the particle. If you now let the particle move freely (i.e., without observing it), it will behave differently than if you did not look at it in the first place. To use the analogy with measuring temperature, it is as if the weather during a day depended on whether you looked at the thermometer in the morning.

Since the measurement affects the state of the particle, its evolution depends on whether a measurement was performed or not.

If that is still not enough for you, you can go deeper and ask how the measurement process works. First of all, you will find that people know surprisingly little about that. They will tell you that the system you are measuring (such as the particle whose position you want to measure) interacts with a second system, the meter, in such a way that some variable of the meter contains information about the measured system and can give a strong, classical measurement signal. But what determines whether a system is classical and can be used to measure other systems or whether it is quantum and can be measured by other systems? Not a clue.1

Even with this little knowledge about measurements, people can describe what is going on surprisingly well. Because the system and the meter have to interact for some time, a lot can happen during the measurement. If you try to measure the position of a particle, the particle will continue to move while you are measuring. Measure too quickly and you will not know where exactly the particle is because you do not collect a strong enough signal. Measure for too long and the particle will move too much during your measurement.

In the end, you can never measure as precisely as you would like. There will always be a small uncertainty in the position of your particle. And this gets even weirder when you try to look at the position again later. Quite surprisingly, the better you know the position at an early time, the more blurred the measurement will be at a later time. This is a result of the Heisenberg uncertainty relation between position and momentum, but that is a story for another time.2

Another thing you can do is measure really slowly, so that you need a long interaction time between your system and the meter. At any given time, you do not have complete information about the state of your system, i.e., you never know exactly where your particle is; all you can have is a guess. The measurement then becomes an inherent part of the evolution of your system and can be used to steer it. There is now a certain randomness in the evolution (remember, all we can talk about before the measurement are the probabilities of each outcome, so the measurement is random at heart) but that does not matter so much since you know what the random measurement outcome is.

You can imagine a simple feedback loop as a sequence of a measurement, a feedback force, and a free evolution. The measurement outcome is random, but the feedback ensures that the particle stays pinned at a fixed position.

If you do not like this randomness, you can use the information you get from the measurement to control your system. You can, for instance, use the result of the position measurement to keep a particle pinned to a particular position. Every time it tries to move a bit (and everything moves a lot in the quantum world), your measurement will tell you so and you can push it back. We have thus arrived at the notion of measurement feedback that I already talked about before.
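To get a feel for how such a loop behaves, here is a deliberately classical toy sketch in Python (not the quantum description, just the measure-and-push-back logic): the position drifts randomly, we measure it with some noise, and we apply a force proportional to the measured value.

```python
import random

def feedback_run(steps=1000, drift=0.1, meas_noise=0.3, gain=0.5, seed=1):
    """Toy loop: free evolution (random kick), noisy measurement, proportional push-back."""
    random.seed(seed)
    x = 0.0                                     # true position of the particle
    for _ in range(steps):
        x += random.gauss(0.0, drift)           # free evolution: a small random kick
        y = x + random.gauss(0.0, meas_noise)   # measurement: true position plus noise
        x -= gain * y                           # feedback: push back towards zero
    return x

# Without feedback (gain=0) the position wanders off in a random walk;
# with feedback it stays pinned near zero despite the random measurement outcomes.
print(feedback_run(gain=0.0), feedback_run(gain=0.5))
```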

The realisation that something as simple as a measurement has such a rich and complex internal structure is one of the things I love about physics. Where most people see a simple (and a little boring) way to get some information, I see an incredibly complex process that people still don’t understand after studying it for decades. More than that, measurements are for me a tool that we can use to control and manipulate quantum systems. And there is nothing boring about that!


1 I am, of course, simplifying things a bit here. There is a lot that we know about measurements (and a lot we don’t!) but it all involves a lot of counterintuitive things and complicated maths. There is no room for the details in a blog.

2 Here, I am mixing the notion of single-shot measurements (i.e., measurements you only do once) and repeated measurements (which can be used to obtain statistics). But since even a single-shot measurement takes a finite time, it is, in a way, a statistical matter. I will try to get to this problem in a later post.

A new start

I am at the point in my PhD where I am truly becoming a researcher and am no longer just a student. How can I tell?

I just finished a project I worked on basically since I started my studies more than two years ago. It was my supervisor’s idea to study this particular problem, even though some of my ideas also helped shape the result, especially in the last half-year.

Now, I have to find something new to work on. Yes, I have to. It is no longer up to my supervisor to do that for me. I will now dig through the literature, see what has been done, and try to find a blank spot in our knowledge that I could fill.

This is not a task I could have easily done when I started my PhD. I could have gone through the literature back then, of course, but it would have been much more difficult for me to identify a problem that is worth solving and that can be solved by a graduate student. But after two years of cracking problems, reading research papers, and generally being immersed in the academic world, my view is very different from when I started. I know better what I can achieve, what problems are worth solving, and what means I should use to tackle them.

This discovery is, of course, rather encouraging. It means I can see the progress I have made since starting my PhD. Not only expressed in the number of publications that appeared on my CV but also in less tangible ways — I am more independent than before, I can orient myself in the body of research, I can understand what others are working on, why they are interested in a particular problem, and how they go about solving it. Still, every new situation is scary — at least to some extent — and this one is no exception.

Doing a PhD is a lot like climbing a mountain. As you start, all you see is the large pile of rock you have to climb and nothing else. The path is long and tiring and never leads to the top in a straight line. And once you are at the top and the view opens up, you see everything around you. Suddenly, you are aware that there are many more mountains around that you could climb. And some — maybe even most — are higher than the one you just climbed. So now you can decide which mountain to climb next. But you must choose carefully. You have to find a mountain that is not too hard to climb, which can be difficult to judge from a single look from afar.


The situation is the same with my next research project. I can see what I have done (that is the mountain I just climbed), what others have achieved, and what has not been done yet, i.e., the mountains I can see around me. Now, I have to find a problem that has not been addressed before but is interesting, important, and relatively easy. How can I judge that? Especially since I have never done it before?

Naturally, I do not have answers to these questions. But I also know that I do not necessarily need them. For a start, my supervisor would not let me go looking for my next project if he did not believe I could find one, and that is an encouraging thought. I also do not need to go and find the next problem all by myself. As I progress through the work of others, I can discuss with my supervisor and colleagues what I found and what I think about it. My goal can thus develop over time and others can help me make sure that I stay on the right track. Finally, I know what my first little steps in this direction will be. Consequently, there is no vast sea of unknown research waiting to be explored but several smaller, manageable pieces.

The joys of theoretical physics

Have you always thought mathematics is dull and complicated? You are certainly not alone. But there is a lot of beauty hidden in it and in the way it describes our world.

Theoretical physics is all about using maths to describe nature. As the universe we live in is vast and filled with myriads of phenomena — starting with the universe itself expanding due to dark energy, galaxies held together thanks to dark matter, new stars being born and dying, planets and asteroids orbiting these stars and colliding with one another; through processes happening on and inside those planets, including the miracle of life; down to the perplexing world of molecules, atoms, and subatomic particles — so the mathematical language in which these processes are described uses a lot of tools, often very complex ones. And yet, there is a surprising level of similarity between different systems.

For a theoretical physicist, there is no difference between an oscillating pendulum, a vibrating string, and a propagating beam of light. Heat transfer and particle diffusion are equivalent because they obey the same mathematical law. According to quantum field theorists, every type of particle (be it a proton, an electron, or a photon) can be seen as a harmonic oscillator and there is almost no qualitative difference between them.
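To make the second example concrete: in one dimension, heat conduction and particle diffusion obey

$$
\frac{\partial T}{\partial t} = \alpha \frac{\partial^2 T}{\partial x^2},
\qquad
\frac{\partial n}{\partial t} = D \frac{\partial^2 n}{\partial x^2},
$$

where $T$ is the temperature, $n$ the particle concentration, and $\alpha$ and $D$ the thermal diffusivity and the diffusion constant. Rename the symbols and the two problems become one and the same.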

Some physicists are trying to take this idea one step further and find a single physical theory encompassing all physics as we know it. Thus, Grand Unified Theories were developed, which aim to unify three of the four fundamental interactions in nature — the electromagnetic, weak, and strong interactions. Including the fourth one — gravity — is a feat that has not yet been achieved. Some even doubt that such a Theory of Everything will ever be formulated.

Many theoretical physicists (like me, for example) do not pursue such noble quests but focus on smaller, albeit no less meaningful, tasks: How does X work? Can it be used for something worthwhile? What is the best way to do it? These are not important questions on the global scale (compared to questions such as ‘How did the universe come to be?’) but they are all the more important for technological progress. Such development is ultimately the domain of experimental physicists and engineers, but finding ways of using new bits of physics in ways humanity can benefit from is part of a theoretical physicist’s work.

Such a process can be illustrated by a problem that is occupying many a scientific mind: building a quantum computer. It will, of course, be experimental physicists who build the first functioning prototype (assuming we ever develop one), but theoretical physicists examine how such a device should be built. Should we use atoms as the information carriers? Photons? Something more exotic? Those are some of the questions quantum information theorists are trying to answer.

There is a lot of mathematical beauty in solving such tasks, too. After all, a theoretical physicist sees a quantum computer as a large register of quantum bits on which arbitrary operations can be performed, which can be stored for a long time in a quantum memory, and which can be sent to another quantum computer via a quantum internet channel. The need for investigating various platforms comes from the experimental realisation — each potential platform has its own unique advantages and disadvantages that have to be carefully weighed when searching for the optimal architecture for a successful quantum computer.

All that said, there are many more surprises hidden in quantum information theory. One often finds other, unexpected connections between the weirdest parts of the theory. And finding them is always one of the biggest delights working with theoretical physics can bring.

All this mathematical beauty can, actually, also be useful. If two different systems behave in a similar way, we can use one to simulate the dynamics of the other. This is used more and more often in quantum physics, where complicated systems (especially those that cannot be observed directly in a laboratory) can be simulated using much simpler systems. This way, we can relatively easily learn a lot about an elaborate system that cannot be simulated on classical computers. (Due to their nature, it is possible to simulate only small quantum systems on classical computers.)

The field of quantum simulation (i.e., using simple quantum systems to simulate the evolution of more complicated systems) is still in its infancy. But it will probably not take long before we can simulate systems that are too difficult to solve for regular computers. We can then expect a better understanding of many physical and chemical processes such as high-temperature superconductivity, quantum phase transitions, the dynamics of chemical reactions, or photosynthesis. And all that thanks to the incredibly rich and intriguing structure of the mathematical language we use to describe our world.

How to measure time

Precise timekeeping is crucial for many of our daily activities. High-speed communication (on the internet or in a mobile phone network), satellite navigation, and many other tasks require time synchronisation over long distances to work properly. But how is time measured? And can quantum physics help reach better accuracies?

The basic idea behind measuring time is simple and similar to any other measurement — you simply compare the time duration with a reference. The reference has to be an event that regularly repeats itself so that a single repetition is the basic time unit and the number of repetitions gives the overall time. An example of such a process is the Sun rising and setting every day. Its regular movement across the sky defines one of the most fundamental units of time: the day. It is a very simple and natural way to measure time but it has one disadvantage — it is long. If we want to measure shorter times, we need a better reference — a process that repeats itself faster than once a day.

For this, we can use a pendulum and let it swing. Its movement is periodic and, by counting the number of swings, we can measure time. Since the period of the oscillations depends on the length of the pendulum, we can even tune it and choose how fast our reference should be. If the length of the pendulum is about 25 centimetres, its period will be one second. (Clocks normally use pendulums that are about 99.4 centimetres long, resulting in a period of two seconds, or a half-period of one second.) Using a system of gears, the periods can be counted and transformed into movements of hands that then show the time on a clock face.
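The relation behind these numbers is the small-angle formula for the period of a pendulum,

$$
T = 2\pi\sqrt{\frac{L}{g}},
$$

so with $g \approx 9.81\ \mathrm{m/s^2}$, a length of $L \approx 0.25\ \mathrm{m}$ gives $T \approx 1\ \mathrm{s}$ and $L \approx 0.994\ \mathrm{m}$ gives $T \approx 2\ \mathrm{s}$.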

There is one problem with using a pendulum, though. The exact period of the swinging depends on the local gravitational field, which varies from place to place on Earth, depending on latitude and altitude. A pendulum with a period of one second at the Equator will have a period of 997 milliseconds at the North Pole. That might not seem like such a big difference but in a single day, the North Pole clock will run fast by almost four minutes! Clearly, if we want a more precise time measurement, we need something that oscillates even faster.
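A quick back-of-the-envelope check in Python (using approximate textbook values of the surface gravity at the Equator and at the poles):

```python
import math

g_equator, g_pole = 9.780, 9.832           # m/s^2, approximate surface gravity
L = g_equator / (4 * math.pi ** 2)         # length chosen so the period is 1 s at the Equator

T_eq = 2 * math.pi * math.sqrt(L / g_equator)   # 1.000 s by construction
T_pole = 2 * math.pi * math.sqrt(L / g_pole)    # ~0.997 s: the pendulum swings faster at the pole

# The clock counts each swing as one second, so over one real day
# the North Pole clock runs ahead by 86400 * (T_eq / T_pole - 1) seconds.
gain = 86400 * (T_eq / T_pole - 1)
print(round(T_pole, 4), round(gain / 60, 1))    # ≈ 0.9974 s and ≈ 3.8 minutes
```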

Quartz crystal for a wristwatch or a clock. (Public domain, source.)

The most commonly used oscillator in today’s clocks and watches is a quartz crystal. It can be made very small and, due to its mechanical properties, it can vibrate at much higher frequencies. Typically, the crystal vibrates 32,768 times in a single second and is therefore much more precise than a pendulum clock. The accuracy is improved from about 15 seconds per day to half a second per day — an improvement by a factor of 30. (The improvement is not larger because quartz clocks — especially wristwatches — suffer from many technical imperfections that are not so pronounced in pendulum clocks.)

We can use even faster processes to further improve the accuracy of timekeeping. But it is difficult to make mechanical oscillators — pendulums, vibrating crystals, or anything else — that can oscillate at such high frequencies. We therefore need a natural oscillator with a very high frequency. For that, we can use atoms: their internal energy can only take certain discrete values, and the energy difference between two levels corresponds to a specific frequency of the electromagnetic field that can be emitted or absorbed in the transition between those levels.
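The frequency of the emitted or absorbed radiation is given by Planck’s relation,

$$
\nu = \frac{E_2 - E_1}{h},
$$

where $h$ is Planck’s constant and $E_1$, $E_2$ are the energies of the two levels. For the caesium atoms used in today’s atomic clocks, this frequency is about 9.2 GHz.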

On-chip atomic clock. (Public domain, source.)

Atomic clocks have two advantages: they are natural oscillators (not human-made), so atoms of a given species will always oscillate at the same frequency, and they oscillate very fast — about nine billion times a second for the caesium atoms used to define the second. There is a price to pay for this precision because, naturally, it is extremely difficult to count the individual periods of a system oscillating so fast. It can still be done, though, and such clocks are now the most precise time standards we have — their error is about one second in 100 million years.

Some atomic transitions have even higher frequencies than the few gigahertz used in atomic clocks today. Transitions in the optical domain (in contrast to the microwave transitions at gigahertz frequencies) oscillate hundreds of trillions of times in a single second. Those oscillations are, of course, even more challenging to count than the oscillations in current atomic clocks. Clocks based on optical transitions — called optical clocks — are nevertheless being developed and promise incredible accuracy. With optical clocks, it is possible to measure the age of the universe (about 14 billion years) with an error smaller than one second!

What are such highly precise measurements good for? Without well-synchronised time across the Earth, internet communication (and any other form of high-speed communication, including mobile phone networks or TV and radio signals) would be much slower. Navigation systems (such as GPS) would not work with few-metre precision. GPS receivers measure the time delay of signals from several satellites and determine their position from these delays and the known positions of the satellites. More precise time means better accuracy of the navigation system.
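A toy calculation (not how a real receiver works, but it shows why timing matters): the signal travels at the speed of light, so the distance to a satellite is inferred as the speed of light times the measured delay, and any timing error translates directly into a distance error.

```python
c = 299_792_458.0            # speed of light in m/s

def range_error(timing_error_seconds):
    """Distance error caused by an error in the measured signal delay."""
    return c * timing_error_seconds

# A timing error of just 10 nanoseconds already shifts the inferred distance by ~3 metres,
# which is why few-metre navigation needs nanosecond-level time synchronisation.
print(range_error(10e-9))    # ≈ 3.0 metres
```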

There are also many scientific applications. With precise time measurements, we can, for instance, test one of the predictions of Einstein’s general relativity, which states that the flow of time is affected by a gravitational field. In a strong gravitational field, time passes more slowly than where gravity is weaker. The effect is very weak under the conditions on Earth but it still has to be taken into account for satellite systems. Current atomic clocks are, in fact, so precise that this difference in the passage of time can be measured between two places that are only about ten centimetres apart in height.
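The size of the effect near the Earth’s surface can be estimated from the gravitational redshift formula: a clock raised by a height $\Delta h$ runs faster by a fraction

$$
\frac{\Delta f}{f} \approx \frac{g\,\Delta h}{c^2} \approx \frac{9.8 \times 0.1}{(3\times 10^8)^2} \approx 10^{-17}
$$

for $\Delta h = 10\ \mathrm{cm}$, a fractional frequency shift that the best clocks can indeed resolve.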

New page on the basics of quantum physics

Do you have absolutely no knowledge about quantum physics? Do you want to get at least a basic understanding about what it is, why it is important, or how it can be relevant to everyday life? I just added a new page that tries to explain some of these basic questions in (I hope) an attractive form. This page is a work in progress, so remember to check back now and then as more and more issues will be answered! Also, if you have some questions yourselves that you would like answered, just ask and I will try to add them to the list.

Simplifying quantum systems


I already talked about using measurements and feedback in quantum physics and how these tools can be used to prepare interesting quantum states. But it is not an easy task — experimental realisations require ultrafast electronics to apply feedback in real time. And theoretical analyses? Those are not easy either.

When monitoring the outside field of an optical cavity with an atom inside, the exact dynamics of the field is often irrelevant.

Take a simple example — an atom that is placed inside an optical cavity. We measure what leaves the cavity and want to use the information we get to control the state of the atom. The first thing a theoretical physicist will do is write down the equation that describes the time evolution of the whole system (i.e., the atom and the cavity field). But we do not really care what happens to the field. We only want to know what the field can tell us about the atom. If only there were a way to get an equation that describes only the dynamics of the atom…

There actually is a whole bunch of methods that can help us do just that — they are generally known under the name adiabatic elimination. We eliminate the uninteresting part of the system (the cavity mode), leaving an equation just for the relevant part (the atom). And why is it called adiabatic? Because all the methods assume that the uninteresting part evolves much faster than the interesting one — the cavity mode will thus quickly reach a steady state (i.e., a state that does not further evolve in time) and, as the atom slowly evolves, the steady state of the cavity will follow it. And physicists call such following of one system by another adiabatic.
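The core of the idea can be sketched with a toy pair of coupled equations (schematic, not the full quantum treatment): if the cavity variable $a$ decays at a fast rate $\kappa$ while being driven by the slowly evolving atomic variable $\sigma$,

$$
\dot a = -\kappa\, a + g\,\sigma, \qquad \kappa \gg g,
$$

then on the slow time scale we may set $\dot a \approx 0$, obtain $a \approx (g/\kappa)\,\sigma$, and substitute this back into the equation for the atom, which then no longer contains the cavity at all.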

All these methods generally suffer from two problems. Firstly, they work only if the cavity field is in a so-called pure state. These are rather special quantum states that you can get if there is no thermal noise (i.e., the system is cold or uses high — typically optical — frequencies). You can imagine thermal noise as if you were shining a regular light bulb into the cavity. Its light is very chaotic (much more so than that of a laser), so the state of the field inside the cavity will be chaotic as well. And that is more difficult to deal with than when the light entering the cavity is a nice, coherent laser beam.

Secondly, adiabatic elimination methods work well if you need to eliminate a single field. If you have a more complicated system that you need to get rid of, it is not that simple. You can, in principle, eliminate more fields one by one, but that takes a long time. And the order in which you eliminate them imposes additional conditions on the system. (You start by eliminating the fastest of the fields, then the second fastest, and so on.)

Complex systems — a pair of qubits coupled to optomechanical transducers, for instance — are too large and cannot be numerically simulated.

Imagine now that you want to work with a more complicated system — you want to entangle two superconducting qubits coupled to optomechanical transducers (like I do). You have the transducers — consisting of a microwave cavity, a mechanical oscillator, and an optical cavity — that you do not really care about and the qubits that are the important part of the system. So if you now want to eliminate the transducers, you have a problem because you have many fields (three for each transducer) and mechanical oscillators which will have thermal noise.

Here, adiabatic elimination becomes more crucial than with a single atom and a single optical cavity. Whereas it is just a matter of convenience for the simple system, the two qubits with two optomechanical transducers cannot be simulated numerically exactly. You would need several terabytes of memory to store the state of such a large system at a single point in time. And what should the feedback applied to the qubits look like? You cannot guess that well with such a complicated system.
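To see where the terabytes come from, here is a rough count in Python (the truncation of each field and oscillator to its lowest 8 energy levels is my own illustrative assumption; the actual requirement depends on the states involved):

```python
# Two qubits plus six bosonic modes (microwave cavity, mechanical oscillator,
# and optical cavity in each of the two transducers), each mode kept to 8 levels.
qubit_dim, mode_dim, n_modes = 2, 8, 6
dim = qubit_dim ** 2 * mode_dim ** n_modes      # ≈ 1 million basis states

# With thermal noise the state is mixed, so we need a full density matrix:
# dim x dim complex numbers at 16 bytes each.
bytes_needed = dim ** 2 * 16
print(dim, bytes_needed / 1e12, "TB")           # ≈ 1.0e6 and ≈ 17.6 TB for a single snapshot
```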

In order to be able to deal with such big and complicated systems, we had to develop a brand new method of adiabatic elimination, and we had to take a completely different approach than people usually take. We made a different assumption than the usual purity — instead, we assume that the eliminated system is Gaussian. This means that there are quantities in the system (our optomechanical transducer) that behave like a classical Gaussian probability distribution. That is true for a large class of systems (including our optomechanical transducers) and makes it possible to describe the transducers using the parameters of these Gaussian distributions, which is much easier than using the full quantum state.
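Concretely, a Gaussian state is completely characterised by the mean values and the covariance matrix of its position and momentum variables,

$$
\bar r_i = \langle r_i \rangle, \qquad
\sigma_{ij} = \tfrac{1}{2}\langle r_i r_j + r_j r_i \rangle - \langle r_i \rangle \langle r_j \rangle,
\qquad r = (x_1, p_1, x_2, p_2, \ldots),
$$

so the number of parameters grows only quadratically with the number of modes instead of exponentially with the size of the Hilbert space.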

The applications of this method are much broader than this particular system. Measurements and feedback are often used in superconducting systems, which typically interact with microwave fields. As a result, thermal noise can be present and the standard methods of adiabatic elimination do not work. The usual way around this problem is to assume that the noise is so small that it can safely be neglected and then apply the standard methods of adiabatic elimination. This assumption usually works relatively well, but our new method works even better (even with almost nonexistent noise!) and is not much more complicated to use.

There is more to adiabatic elimination than the tractability of numerical simulations (which is still pretty important!). It can give us information about the evolution of the small part of the system that we are really interested in. A trained scientist can make a good guess based on the evolution of the whole system (including, for instance, the cavity field), but understanding the exact role of various system parameters (such as the amount of thermal noise) is not always so easy. Now, we have made an important step in understanding these issues.

This post summarises the main results of a paper I wrote with my colleagues on adiabatic elimination with continuous measurements. A free preprint can be found at arXiv.

Building the quantum internet

Do you remember your first computer? And your first internet connection? Sure, they were not as powerful as today’s technology, but they were something completely new and opened up many possibilities. A quantum computer, ideally connected to the quantum internet, must then be even more remarkable. Although it is true that algorithms for quantum computers focus on abstract mathematical tasks such as factoring large numbers, everyday applications will certainly come as well. After all, classical computers were also originally seen solely as calculators.

We now have a pretty good idea of what the quantum internet could look like. Because quantum systems are very sensitive to disturbances and quantum features do not survive for long, the ideal medium for transmitting quantum signals is light. It travels fast and hardly interacts with the surrounding environment, so quantum effects can survive a long-distance transfer.

Quantum computers, on the other hand, can in principle be built in many different ways. Some scientists trap ions in electric fields and use them as the basic building blocks. Others try to build the whole quantum computer from a single molecule and use different parts of this molecule as quantum bits that store information. Some try to use light to perform quantum computations, since such quantum computers can then easily be connected to the quantum internet. There are also those who use superconducting systems.

In a way, superconducting systems are, in their form, most similar to classical computers. You can build a chip from the right material, similarly to an integrated circuit in a normal computer. Then you cool the chip down to a temperature of a few kelvin (around −270 degrees Celsius) and it becomes superconducting — it starts to transmit current without any resistance. Quantum bits can then be represented by superconducting currents of various strengths, similarly to normal computers.

There is just one problem with superconducting quantum computers — it is not possible to connect them directly to the optical quantum internet. The energy of superconducting qubits is much smaller than that of an optical photon, so the two do not interact well. Superconducting systems can interact with microwave fields, but those cannot be transmitted as easily as light because they require low temperatures (just like superconducting systems) to overcome thermal noise.
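The mismatch can be put into numbers (taking a typical qubit frequency of 10 GHz as an example): the energy of a photon is $E = h\nu$, so an optical photon at a few hundred terahertz carries tens of thousands of times more energy than a microwave photon. Moreover, the thermal occupation of a microwave mode at room temperature,

$$
\bar n = \frac{1}{e^{h\nu/k_B T} - 1} \approx \frac{k_B T}{h\nu} \approx 600
\qquad (\nu = 10\ \mathrm{GHz},\ T = 300\ \mathrm{K}),
$$

is huge, whereas at optical frequencies it is essentially zero; this is why light can carry quantum signals through a warm world and microwaves cannot.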

The solution is simple: We let the superconducting qubits interact with microwave photons which can then be converted to light using mechanical oscillators. Or we can even skip the microwave field and couple superconducting qubits directly to mechanical oscillators. That is possible because superconducting qubits are built using capacitors and some other elements. If one of the capacitor plates can vibrate, its position will affect the state of the qubit and the state of the qubit, in turn, determines the position of the vibrating plate.

Because we do not have quantum computers just yet, we can start with a smaller task — we can try to entangle two superconducting qubits that sit on two different chips. That would be a first step towards building quantum internet with superconducting systems.

The number of excitations of two qubits can be measured if the signal from the first qubit (the sphere with an arrow) is converted using a transducer (black box), transmitted, and converted back.

The approach I like is based on measurement feedback, and there are two ways to use it. The first one uses entanglement swapping, where each of the qubits interacts with a microwave field in a way that generates entanglement between the qubit and the field. Each microwave field is then converted to light and travels to a detector, where the two light fields are measured together. In this way, the entanglement is teleported from the fields onto the qubits and the two qubits become entangled.

By entangling each qubit with its transducer locally and then performing a joint measurement on the light fields, one can entangle the two qubits.

Another option is to engineer the system in such a way that we perform a measurement of the number of excitations of the two qubits. Each qubit has two levels — denoted 0 and 1, which also give the number of excitations in the qubit. If we prepare the qubits in a suitable state and the measurement reveals that one qubit is excited but does not tell us which one, they become entangled. This is commonly done with superconducting qubits (without coupling to light, though). With an optical link, it can be done in the following way: we let one qubit interact with a microwave field, which then gets converted to light. The light is transmitted to the second qubit, where it is converted back to a microwave field, interacts with the second qubit, and is measured.

So far, it seems that such tasks can be performed with mechanical oscillators that need not be much better than what is available today. We thus might see the first steps towards quantum networks with superconducting qubits in the near future. But there is still a long way to go if we want to build quantum computers connected by the quantum internet.

This post is loosely based on a talk I gave at the Spring Meeting of the German Physical Society in Heidelberg in March 2015.

Is nature scared of emptiness?

There can never be a truly empty space. That was the opinion of many scholars from the times of ancient Greece up to the beginning of the twentieth century. When the idea of the aether as a medium in which light travels was refuted, the existence of a vacuum became widely accepted. But then the quantum revolution came, and nothing is ever simple with quantum physics.

The main obstacle in achieving space that is entirely empty is the Heisenberg uncertainty principle. It states that the position and momentum of an object can never be known exactly. This is, furthermore, not just due to technical imperfections in measuring these quantities; the object itself does not know them exactly.

Let us now take a glass cell and pump all air out. If we also leave it in complete darkness, there will be no light and, therefore, no electromagnetic field and no atoms or molecules inside, right?

Not quite. Light is an oscillating electromagnetic field and as such can be described mathematically as a harmonic oscillator, similar to a pendulum. And a harmonic oscillator has a position and momentum which, even in the ground state (i.e., with no light), cannot be exactly zero but have some uncertainty. So there still is some electromagnetic field present, even in complete darkness!
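In formulas: the ground state of a harmonic oscillator with frequency $\omega$ still satisfies the uncertainty relation and carries a zero-point energy,

$$
\Delta x\,\Delta p = \frac{\hbar}{2}, \qquad E_0 = \frac{\hbar\omega}{2},
$$

so even with no photons present, the field fluctuates around zero instead of sitting exactly at it.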

But things get even weirder because in quantum physics, virtually everything can be described as a harmonic oscillator. For every kind of particle, one can define a field whose excitations are the respective particles. For light, the field is the electromagnetic field and the particles are photons; electrons are excitations of an electron field; and so on. And each harmonic oscillator has to obey the uncertainty principle. In our glass cell, we thus have small fluctuations of the electromagnetic field but also fluctuations of the electron field and the fields of other particles. The vacuum is an endlessly boiling soup where every now and then an electron pops out and disappears again, then a quark, then something else.

Two metallic plates placed in vacuum will attract or repel each other due to vacuum fluctuations.

Does all that sound ridiculous? It turns out that these phenomena have observable effects. Take, for instance, two metallic plates placed in a vacuum. One would naively expect that nothing will happen to them since they are in a vacuum. But we know better — there are always fluctuations, and these will be smaller in the space between the plates than everywhere around them. As a result, the plates will attract each other; in a configuration other than parallel, they could even repel. This behaviour is known as the Casimir effect (though I am stretching things a bit here — only the fluctuations of the electromagnetic field are important for the Casimir effect) and it has already been observed in experiments.

Another, even more important piece of evidence for the fluctuations of the vacuum is the existence of spontaneous emission. If you excite an atom (for example, by shining light on it), it will eventually radiate the energy it absorbed and end up in its ground state. But from the point of view of classical physics, this happens only when there is an electromagnetic field around the atom. This means that an excited atom in utter darkness should stay excited — but it does not! This can only be explained by quantum physics: the fluctuations of the vacuum are strong enough to kick the atom to its ground state while it emits a photon, playing the same role as the surrounding electromagnetic field does in the classical picture.

So remember — vacuum (for instance the vast empty space between you and the nearest star when watching the skies at night) is not empty. It is alive with many particles that we can never directly see, swirling around. And nature maybe, after all, really is scared of emptiness.

Wi-Fi for a quantum computer

The basic picture of an optomechanical system, the one that even many scientists keep in mind, is that of a cavity with one movable mirror. But that is not the only way to achieve coupling between light and mechanical vibrations. Whenever the light is strong enough (and the mechanical oscillator light enough), the light can be used to control the vibrational state of the mechanical system.

Optomechanical systems can take on various forms, such as a vibrating mirror inside a cavity or a vibrating microdisk.

People have studied all sorts of different systems this way. One option is to use a cavity (with both mirrors fixed) and put a vibrating membrane inside. Other scientists work with microdisks where light travels around thanks to total internal reflection; if the disk can vibrate, strong light will excite mechanical vibrations of the disk. And there are optomechanical platforms that are more exotic than these examples.

The beauty of the theoretical description of such systems lies in the fact that they are all described by the same mathematics. This stays true even if we do not use visible light but a microwave field which cannot be trapped in a cavity using two simple mirrors. Instead, microwave cavities have the form of LC circuits — basic electrical circuits with an inductor (basically a coil) and a capacitor (two conducting plates separated by a thin layer of a dielectric material) that have been used in electronics for decades.

Optomechanics can be studied even in microwave systems, where the role of the optical cavity is taken by an LC circuit and vibrating mirror is replaced by an oscillating capacitor plate.

If such a circuit is to be used in the quantum regime, though, it is not that simple. The circuit has to be built from a superconducting material (and cooled down for the experiments) so that the electrical signals can travel through the circuit many times without being absorbed. If we now make one of the capacitor plates vibrate, usually by making it from a membrane, the following happens:

The microwave field acts as a varying electric field across the capacitor. Since the membrane can vibrate freely, it will move in response to the electric field. But that results in a varying distance between the capacitor plates, which affects the resonance of the LC circuit in a way similar to a moving mirror in an optical cavity. The whole system is then described in the same way as other optomechanical systems — even though we now use a microwave field instead of visible light!
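In symbols, for an idealised parallel-plate capacitor: the resonance frequency of the circuit is $\omega = 1/\sqrt{LC}$ and the capacitance is $C = \varepsilon A/d$, so a small displacement $x$ of the plate separation $d_0$ shifts the resonance,

$$
\omega(d_0 + x) = \frac{1}{\sqrt{L\,C(d_0 + x)}} \approx \omega_0\left(1 + \frac{x}{2 d_0}\right),
$$

which is exactly the role the moving mirror plays for an optical cavity.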

Imagine that we now take such an LC circuit with a vibrating membrane and put the membrane in an optical cavity (either by making it an end mirror or putting it inside a closed cavity). The microwaves as well as the visible light can now swap state with the vibrating membrane. Using such a system, we can, for example, swap the state of the microwave field and the membrane and then swap the state of the membrane and the visible light. Any signal that was initially encoded in the microwave field has now been converted to light.
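Such a swap has a simple textbook description: two resonant modes $a$ and $b$ coupled by a beam-splitter-type interaction,

$$
H = \hbar g\,(a^\dagger b + a b^\dagger),
$$

exchange their states completely (up to a phase) after an interaction time $t = \pi/2g$. Doing this first for the microwave field and the membrane and then for the membrane and the light moves the signal from microwaves to light.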

Combining microwave and optical cavity with a vibrating membrane, we get a system that is capable of converting microwaves to visible light and vice versa.

Such a conversion is commonly done in the classical world — Wi-Fi uses microwaves to send signals between your computer and your router, and light is used in optical fibres to transmit these signals over long distances to a server. The router measures the microwave signal and passes it over a cable to a modem, where it is measured again and sent in the form of light to the other end, where the whole process is repeated in reverse. That is something you cannot do in the quantum world, where every measurement destroys the quantum nature of the signal. This is why more sophisticated methods — such as swapping the state with a mechanical oscillator — have to be used.

There is one immediate application for these opto-electromechanical systems (i.e., systems comprising an optical cavity, an LC circuit, and a mechanical oscillator). The conversion of microwave signals to visible light can be used to improve the detection efficiency of weak microwave fields, which is a very difficult task. If you could efficiently convert these signals to light, you would only need to measure weak light pulses instead, which is easier. Radio astronomers, for instance, could then use these systems to detect weaker sources of radio waves in the universe. Magnetic resonance imaging could profit by reaching better accuracy than with current detection strategies, which could lead to earlier diagnoses of serious illnesses. But we still have to wait for these applications — there is a long way from a successful experimental demonstration to the practical use of an effect.