MIT scientists tune the entanglement structure in an array of qubits
Entanglement is a form of correlation between quantum objects, such as particles at the atomic scale. This uniquely quantum phenomenon cannot be explained by the laws of classical physics, yet it is one of the properties that explains the macroscopic behavior of quantum systems.
Because entanglement is central to the way quantum systems work, understanding it better could give scientists a deeper sense of how information is stored and processed efficiently in such systems.
Qubits, or quantum bits, are the building blocks of a quantum computer. However, it is extremely difficult to make specific entangled states in many-qubit systems, let alone investigate them. There are also a variety of entangled states, and telling them apart can be challenging.
Now, MIT researchers have demonstrated a technique to efficiently generate entanglement among an array of superconducting qubits that exhibit a specific type of behavior.
Over the past years, the researchers at the Engineering Quantum Systems (EQuS) group have developed techniques using microwave technology to precisely control a quantum processor composed of superconducting circuits. In addition to these control techniques, the methods introduced in this work enable the processor to efficiently generate highly entangled states and shift those states from one type of entanglement to another — including between types that are more likely to support quantum speed-up and those that are not.
“Here, we are demonstrating that we can utilize the emerging quantum processors as a tool to further our understanding of physics. While everything we did in this experiment was on a scale which can still be simulated on a classical computer, we have a good roadmap for scaling this technology and methodology beyond the reach of classical computing,” says Amir H. Karamlou ’18, MEng ’18, PhD ’23, the lead author of the paper.
The senior author is William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, leader of the EQuS group, and associate director of the Research Laboratory of Electronics. Karamlou and Oliver are joined by Research Scientist Jeff Grover, postdoc Ilan Rosen, and others in the departments of Electrical Engineering and Computer Science and of Physics at MIT, at MIT Lincoln Laboratory, and at Wellesley College and the University of Maryland. The research appears today in Nature.
Assessing entanglement
In a large quantum system comprising many interconnected qubits, one can think about entanglement as the amount of quantum information shared between a given subsystem of qubits and the rest of the larger system.
The entanglement within a quantum system can be categorized as area-law or volume-law, based on how this shared information scales with the geometry of subsystems. In volume-law entanglement, the amount of entanglement between a subsystem of qubits and the rest of the system grows proportionally with the total size of the subsystem.
On the other hand, area-law entanglement depends on how many shared connections exist between a subsystem of qubits and the larger system. As the subsystem expands, the amount of entanglement only grows along the boundary between the subsystem and the larger system.
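This shared quantum information can be made concrete as the von Neumann entropy of a subsystem's reduced density matrix. The toy sketch below (an illustration, not the experiment's measurement protocol; the helper `subsystem_entropy` is hypothetical) contrasts the two extremes: a product state has zero entropy across any cut, while a Haar-random state shows entropy growing roughly linearly with subsystem size — the hallmark of volume-law scaling. In the 2D grid picture, an area-law state's entropy would instead track the boundary length of the subsystem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12  # total qubits in the toy system

def subsystem_entropy(psi, k):
    """Von Neumann entropy (in bits) of the first k qubits of an n-qubit state."""
    m = psi.reshape(2**k, 2**(n - k))
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients across the cut
    p = s**2                                  # probabilities of Schmidt terms
    p = p[p > 1e-12]                          # drop numerical zeros
    return float(-(p * np.log2(p)).sum())

# Product state |00...0>: no entanglement across any cut
product = np.zeros(2**n)
product[0] = 1.0

# Haar-random state: near-maximal, volume-law-like entanglement
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

for k in (1, 2, 4, 6):
    print(k, subsystem_entropy(product, k), subsystem_entropy(psi, k))
```

For the random state, the entropy climbs with every qubit added to the subsystem (approaching k bits, per Page's estimate), whereas the product state stays at zero throughout.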
In theory, the formation of volume-law entanglement is related to what makes quantum computing so powerful.
“While we have not yet fully abstracted the role that entanglement plays in quantum algorithms, we do know that generating volume-law entanglement is a key ingredient to realizing a quantum advantage,” says Oliver.
However, volume-law entanglement is also more complex than area-law entanglement, and simulating it at scale on a classical computer is practically prohibitive.
“As you increase the complexity of your quantum system, it becomes increasingly difficult to simulate it with conventional computers. If I am trying to fully keep track of a system with 80 qubits, for instance, then I would need to store more information than what we have stored throughout the history of humanity,” Karamlou says.
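The arithmetic behind that claim is easy to check: a generic 80-qubit state is described by 2^80 complex amplitudes, and storing each at double precision takes 16 bytes:

```python
n_qubits = 80
amplitudes = 2 ** n_qubits            # ~1.2e24 complex amplitudes
bytes_needed = amplitudes * 16        # 16 bytes per double-precision complex number
print(f"{bytes_needed:.2e} bytes")    # ~1.9e25 bytes, i.e. roughly 19 yottabytes
```

For comparison, estimates of all digital data ever stored by humanity run to hundreds of zettabytes (on the order of 10^23 bytes), about a hundred times less.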
The researchers created a quantum processor and control protocol that enable them to efficiently generate and probe both types of entanglement.
Their processor comprises superconducting circuits, which are used to engineer artificial atoms. The artificial atoms are utilized as qubits, which can be controlled and read out with high accuracy using microwave signals.
The device used for this experiment contained 16 qubits, arranged in a two-dimensional grid. The researchers carefully tuned the processor so all 16 qubits have the same transition frequency. Then, they applied an additional microwave drive to all of the qubits simultaneously.
If this microwave drive has the same frequency as the qubits, it generates quantum states that exhibit volume-law entanglement. However, as the microwave frequency increases or decreases, the qubits exhibit less volume-law entanglement, eventually crossing over to entangled states that increasingly follow an area-law scaling.
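The link between where a state sits in the energy spectrum and how much entanglement it carries can be sketched in miniature with exact diagonalization. The model below is an assumption for illustration only — a 1D XX chain of 8 qubits, not the paper's driven 2D 16-qubit lattice, and the helpers `two_site` and `half_chain_entropy` are hypothetical — but it shows the same qualitative picture: low-lying eigenstates carry modest half-chain entanglement, while eigenstates deep in the spectrum carry much more.

```python
import numpy as np

N = 8  # qubits in the toy chain
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def two_site(op, i):
    """op (x) op acting on neighboring sites i, i+1 of the N-qubit chain."""
    mats = [I2] * N
    mats[i] = op
    mats[i + 1] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# XX ("hopping") Hamiltonian: H = sum_i (X_i X_{i+1} + Y_i Y_{i+1}) / 2
H = sum(0.5 * (two_site(X, i) + two_site(Y, i)) for i in range(N - 1))

energies, states = np.linalg.eigh(H)

def half_chain_entropy(psi):
    """Von Neumann entropy (in bits) across the middle cut of the chain."""
    s = np.linalg.svd(psi.reshape(2**(N // 2), 2**(N // 2)), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

entropies = [half_chain_entropy(states[:, j]) for j in range(2**N)]
# The ground state is weakly entangled; mid-spectrum eigenstates much more so
print(entropies[0], max(entropies))
```

Loosely, the resonant drive in the experiment prepares states drawn from the highly entangled middle of the spectrum, and detuning the drive shifts the prepared states toward the spectrum's less entangled edges.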
Careful control
“Our experiment is a tour de force of the capabilities of superconducting quantum processors. In one experiment, we operated the processor both as an analog simulation device, enabling us to efficiently prepare states with different entanglement structures, and as a digital computing device, needed to measure the ensuing entanglement scaling,” says Rosen.
To enable that control, the team put years of work into carefully building up the infrastructure around the quantum processor.
By demonstrating the crossover from volume-law to area-law entanglement, the researchers experimentally confirmed what theoretical studies had predicted. More importantly, this method can be used to determine whether the entanglement in a generic quantum processor is area-law or volume-law.
“The MIT experiment underscores the distinction between area-law and volume-law entanglement in two-dimensional quantum simulations using superconducting qubits. This beautifully complements our work on entanglement Hamiltonian tomography with trapped ions in a parallel publication in Nature in 2023,” says Peter Zoller, a professor of theoretical physics at the University of Innsbruck, who was not involved with this work.
“Quantifying entanglement in large quantum systems is a challenging task for classical computers but a good example of where quantum simulation could help,” says Pedram Roushan of Google, who also was not involved in the study. “Using a 2D array of superconducting qubits, Karamlou and colleagues were able to measure entanglement entropy of various subsystems of various sizes. They measure the volume-law and area-law contributions to entropy, revealing crossover behavior as the system’s quantum state energy is tuned. It powerfully demonstrates the unique insights quantum simulators can offer.”
In the future, scientists could utilize this technique to study the thermodynamic behavior of complex quantum systems, which is too complex to be studied using current analytical methods and practically prohibitive to simulate on even the world’s most powerful supercomputers.
“The experiments we did in this work can be used to characterize or benchmark larger-scale quantum systems, and we may also learn something more about the nature of entanglement in these many-body systems,” says Karamlou.
Additional co-authors of the study are Sarah E. Muschinske, Cora N. Barrett, Agustin Di Paolo, Leon Ding, Patrick M. Harrington, Max Hays, Rabindra Das, David K. Kim, Bethany M. Niedzielski, Meghan Schuldt, Kyle Serniak, Mollie E. Schwartz, Jonilyn L. Yoder, Simon Gustavsson, and Yariv Yanay.
This research is funded, in part, by the U.S. Department of Energy, the U.S. Defense Advanced Research Projects Agency, the U.S. Army Research Office, the National Science Foundation, the STC Center for Integrated Quantum Materials, the Wellesley College Samuel and Hilda Levitt Fellowship, NASA, and the Oak Ridge Institute for Science and Education.
I do often wonder what it feels like to like the popular thing.
"Dude what are you talking about, you're literally a Marvel stan-" I know, that's not what I mean.
I mean like... Okay, example, Destiel. You don't have to search it out, you don't have to force your algorithm to show you, it just does, because it's popular (on Tumblr anyway). I don't follow anything Supernatural related and I still see it.
And I like the jokes, but I honestly couldn't care less otherwise. The only SPN character I actually care about is Sam and maybe Jack (but he's in the later seasons that I didn't really watch so...). And so much of the fandom is focused on Destiel that it sometimes frustrates me when I have to go out of my way to see what I want instead of it.
That kind of thing.
What does it feel like for your opinions to align so much with the popular fandom consensus? To not have to go out of your way to see your faves and biases, to not have to be careful about what you interact with, lest it fucks up your algorithm and you have to do it all over again. To not have it even work most of the time.
More examples.
I really like Louis Tomlinson but I've never been on a site where the algorithm didn't try to shove Larry and Harry Styles down my throat when I started interacting with posts about him. (Naturally, I ignore those posts, but they're such a big part of the fandom that ignoring them makes the algorithm stop showing me posts about Louis too. I've honestly given up.)
My BTS bias is so far from the most popular member that my FYP doesn't even show me posts with him, like, period. (Not to mention how many people who don't stan BTS actively hate them and/or the fandom, and/or make awful jokes about them)
And then there's my whole thing with Captain America where none of my irl friends even like him and so when I have to ramble about him I have to go scream into the internet void and hope it screams back.
So, as someone who always seems to fall for the fandom underdog... I just wonder what it feels like to not have to do all this.
An open letter to @staff
I already submitted this to Support under "Feedback," but I'm sharing it here too as I don't expect it to get a response, and I feel like putting it out in public may be more effective than sending it off into the void.
The recent post on the Staff blog about changing tumblr to an algorithmic feed features a large amount of misinformation that I feel staff needs to address, openly and honestly, with information on where this data was sourced at the very least.
Claim 1: Algorithms help small creators.
This is false, as algorithms are designed to push content that gets engagement in order to get it more engagement, thereby assuring that the popular remain popular and the small remain small except in instances of extreme luck.
This can already be seen on the tumblr radar, which is a combination of staff picks (usually the same half-dozen fandoms or niche special interests like Lego photography) that already have a ton of engagement, and posts that are getting enough engagement to hit the radar organically. Tumblr has an algorithm that runs like every other socmed algorithm on the planet, and it will decimate the reach of small creators just like every other platform before it.
Claim 2: Only a small portion of users utilize the chronological feed.
You can find a poll by user @darkwood-sleddog here that, at the time of writing, sits at over 40 THOUSAND responses showing that over 96 percent of them use the chronological feed*. Claiming otherwise isn't just a misstatement, it's a lie. You are lying to your core userbase and expecting them to accept it as fact. It's not just unethical, it's insulting to people who have been supporting your platform for over a decade.
Claim 3: Tumblr is not easy to use.
This is also 100% false and you ABSOLUTELY know it. Tumblr is EXTREMELY easy to use; the issue is that the documentation, the explanations of features, and often even the stability of the service are subpar. All of this would be very easy for staff to fix, if they would invest in the creation of walkthroughs and clear explanations of how various site features work, as well as finally fixing the search function. Your inability to explain how your service works should not result in completely ignoring the needs and wants of your core long-term userbase. The fact that you're more willing to invest in the very systems that have made every other form of social media so horrifically toxic than in making it easier for people to use the service AS IT WORKS NOW and fixing the parts that don't work as well speaks volumes about what tumblr staff actually cares about.
You will not get a paycheck if your platform becomes defunct, and the thing that makes it special right now is that it is the ONLY large-scale socmed platform on THE ENTIRE INTERNET with a true chronological feed and no aggressive algorithmic content serving. The recent post from staff indicates that you are going to kill that, and are insisting that it's what we want. It is not. I'd hazard a guess that most of the dev team knows it isn't what we want, but I assume the money people don't care. The user base isn't relevant, just how much money they can bring in.
The CEO stated he wanted this to remain as sort of the last bastion of the Old Internet, and yet here we are, watching you declare you intend to burn it to the ground.
You can do so much better than this.
Response to the Update
Under the cut for readability, because everything said above still applies.
I already said this in a reblog on the post itself, but I'm adding it to this one for easy access: people read it that way because that's what you said.
Staff considers the main feed as it exists to be "outdated," to the point that you literally used that word to describe it, and the main goal expressed in this announcement is to figure out what makes "high-quality content" and serve that to users moving forward.
People read it that way because that is what you said.
*The final results of the poll, after 24 hours:
136,635 votes break down thusly:
An algorithm based feed where I get "the best of tumblr." @ 1.3% (roughly 1,776 votes)
Chronological feed that only features blogs I follow. @ 95.2% (roughly 130,077 votes)
This doesn't affect me personally. @ 3.5% (roughly 4,782 votes)
To anyone concerned about KOSA and the state of the web
My wife, @utopicwork, is working hard on a "next internet" with the primary goal of being a place where marginalized people can safely and privately communicate without being restricted by the whims of advertising algorithms and malicious bills.
This would be a decentralized peer-to-peer network, which means
A) it won't be easily shut down
B) it's built around the best aspect of tumblr: being able to choose who you do and don't connect to
She is a highly qualified computer scientist with years of experience in cybersecurity, web development and network technology. However, she can't do this alone. A trans woman is fighting hard for the future of free communication so please support her.