Sharing an academic viewpoint on quantum computing from an overseas source: general-purpose quantum computing cannot be realized within the foreseeable future.
Original article URL:
https://spectrum.ieee.org/the-case-against-quantum-computing
Quantum computing spans two disciplines: quantum physics and computer science. The physics side studies how to actually build a quantum computer, while the computer-science side studies how to write programs for such a machine. To be clear, the post shared here presents the view of a physics expert. As I understand it, his argument is that reproducing the general-purpose functionality of today's computers on a quantum computer would require an enormous number of qubits, and that this is unlikely to be achieved in the foreseeable future, perhaps ever. Moreover, as the qubit count grows, the obstacle is not only quantum-computer hardware: error correction at that scale is hard to solve even at the level of existing theory, and judging by the current state of research these problems will not be resolved in the short term, if ever. Meanwhile, the field is being heavily hyped, especially by journalists; once the researchers in this area have secured the benefits of that hype, their enthusiasm will naturally fade, the problems will remain unsolved, and the topic will cool down. Other scientists have already voiced dissenting views about the overheated discussion around quantum computing, but those views have received little attention.
PS. Although I work in computer technology, I do not understand the physics behind quantum computing very well, and I am clearly not of sufficient standing for such a contentious debate, but I share this view. Not only is it doubtful whether quantum-computing hardware can overcome its technical limits; the principles of programming a quantum computer are also fundamentally different from those of today's Turing-machine-based computers. Many people are writing quantum code today, but it should be clear that all of this work rests on assumptions. That is, almost all present-day quantum programs run in simulation environments, i.e., quantum computers emulated on classical machines. Given that whether a useful quantum computer can be built at all is still in dispute, writing code against a simulated machine (that is, assuming the quantum computer already exists) can perhaps only be counted as a form of scientific research.
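To make that last point concrete, below is a minimal sketch of what "running quantum code" usually means in practice today: the full state vector of a couple of qubits is tracked in NumPy on an ordinary computer, gates are applied as matrix multiplications, and "measurement" is imitated by random sampling. The helper function and the two-qubit Bell-state example are my own illustration, not the API of any particular quantum framework.

```python
# Minimal classical state-vector "quantum simulator" sketch (illustrative only).
# It prepares a 2-qubit Bell state and samples measurements -- exactly the kind of
# classical emulation that most quantum "programming" actually runs on today.
import numpy as np

def apply_gate(state, gate, qubit, n_qubits):
    """Apply a single-qubit gate (2x2 unitary) to `qubit` of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                        # controlled-NOT (control qubit 0, target qubit 1)
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0                   # start in |00>
state = apply_gate(state, H, qubit=0, n_qubits=2)     # superposition on qubit 0
state = CNOT @ state                                  # entangle -> (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2                            # Born rule: measurement probabilities
samples = np.random.default_rng(0).choice(4, size=1000, p=probs)
print(probs)                                          # ~[0.5, 0, 0, 0.5]
print(np.bincount(samples, minlength=4))              # only outcomes 00 and 11 appear
```

The point is not the algorithm itself but where it executes: the classical machine must store and update every amplitude explicitly, which is feasible for two qubits and hopeless for thousands.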
Original article content:
Quantum computing is all the rage. It seems like hardly a day goes by without some news outlet describing the extraordinary things this technology promises. Most commentators forget, or just gloss over, the fact that people have been working on quantum computing for decades—and without any practical results to show for it.
We've been told that quantum computers could “provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence." We've been assured that quantum computers will “forever alter our economic, industrial, academic, and societal landscape." We've even been told that “the encryption that protects the world's most sensitive data may soon be broken" by quantum computers. It has gotten to the point where many researchers in various fields of physics feel obliged to justify whatever work they are doing by claiming that it has some relevance to quantum computing.
Meanwhile, government research agencies, academic departments (many of them funded by government agencies), and corporate laboratories are spending billions of dollars a year developing quantum computers. On Wall Street, Morgan Stanley and other financial giants expect quantum computing to mature soon and are keen to figure out how this technology can help them.
It's become something of a self-perpetuating arms race, with many organizations seemingly staying in the race if only to avoid being left behind. Some of the world's top technical talent, at places like Google, IBM, and Microsoft, are working hard, and with lavish resources in state-of-the-art laboratories, to realize their vision of a quantum-computing future.
In light of all this, it's natural to wonder: When will useful quantum computers be constructed? The most optimistic experts estimate it will take 5 to 10 years. More cautious ones predict 20 to 30 years. (Similar predictions have been voiced, by the way, for the last 20 years.) I belong to a tiny minority that answers, “Not in the foreseeable future." Having spent decades conducting research in quantum and condensed-matter physics, I've developed my very pessimistic view. It's based on an understanding of the gargantuan technical challenges that would have to be overcome to ever make quantum computing work.
The idea of quantum computing first appeared nearly 40 years ago, in 1980, when the Russian-born mathematician Yuri Manin, who now works at the Max Planck Institute for Mathematics, in Bonn, first put forward the notion, albeit in a rather vague form. The concept really got on the map, though, the following year, when physicist Richard Feynman, at the California Institute of Technology, independently proposed it.
Realizing that computer simulations of quantum systems become impossible to carry out when the system under scrutiny gets too complicated, Feynman advanced the idea that the computer itself should operate in the quantum mode: “Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy," he opined. A few years later, University of Oxford physicist David Deutsch formally described a general-purpose quantum computer, a quantum analogue of the universal Turing machine.
The subject did not attract much attention, though, until 1994, when mathematician Peter Shor (then at Bell Laboratories and now at MIT) proposed an algorithm for an ideal quantum computer that would allow very large numbers to be factored much faster than could be done on a conventional computer. This outstanding theoretical result triggered an explosion of interest in quantum computing. Many thousands of research papers, mostly theoretical, have since been published on the subject, and they continue to come out at an increasing rate.
The basic idea of quantum computing is to store and process information in a way that is very different from what is done in conventional computers, which are based on classical physics. Boiling down the many details, it's fair to say that conventional computers operate by manipulating a large number of tiny transistors working essentially as on-off switches, which change state between cycles of the computer's clock.
The state of the classical computer at the start of any given clock cycle can therefore be described by a long sequence of bits corresponding physically to the states of individual transistors. With N transistors, there are 2^N possible states for the computer to be in. Computation on such a machine fundamentally consists of switching some of its transistors between their “on" and “off" states, according to a prescribed program.
In quantum computing, the classical two-state circuit element (the transistor) is replaced by a quantum element called a quantum bit, or qubit. Like the conventional bit, it also has two basic states. Although a variety of physical objects could reasonably serve as quantum bits, the simplest thing to use is the electron's internal angular momentum, or spin, which has the peculiar quantum property of having only two possible projections on any coordinate axis: +1/2 or –1/2 (in units of the Planck constant). For whatever the chosen axis, you can denote the two basic quantum states of the electron's spin as ↑ and ↓.
Here's where things get weird. With the quantum bit, those two states aren't the only ones possible. That's because the spin state of an electron is described by a quantum-mechanical wave function. And that function involves two complex numbers, α and β (called quantum amplitudes), which, being complex numbers, have real parts and imaginary parts. Those complex numbers, α and β, each have a certain magnitude, and according to the rules of quantum mechanics, their squared magnitudes must add up to 1.
That's because those two squared magnitudes correspond to the probabilities for the spin of the electron to be in the basic states ↑ and ↓ when you measure it. And because those are the only outcomes possible, the two associated probabilities must add up to 1. For example, if the probability of finding the electron in the ↑ state is 0.6 (60 percent), then the probability of finding it in the ↓ state must be 0.4 (40 percent)—nothing else would make sense.
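The 60/40 example can be checked with a few lines of arithmetic. The sketch below (my own illustration, not from the article) picks amplitudes whose squared magnitudes are 0.6 and 0.4, with arbitrary phases, and confirms both the normalization condition and the measurement statistics.

```python
# Check the 0.6 / 0.4 example: amplitudes alpha, beta with |alpha|^2 + |beta|^2 = 1.
import numpy as np

alpha = np.sqrt(0.6) * np.exp(1j * 0.3)    # arbitrary phase -- only |alpha|^2 matters here
beta  = np.sqrt(0.4) * np.exp(1j * 1.7)

p_up, p_down = abs(alpha) ** 2, abs(beta) ** 2
assert np.isclose(p_up + p_down, 1.0)       # normalization required by quantum mechanics

# Simulated measurements of many identically prepared electrons:
rng = np.random.default_rng(1)
outcomes = rng.choice(["up", "down"], size=10_000, p=[p_up, p_down])
print(p_up, p_down)                          # 0.6, 0.4
print((outcomes == "up").mean())             # close to 0.6
```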
In contrast to a classical bit, which can only be in one of its two basic states, a qubit can be in any of a continuum of possible states, as defined by the values of the quantum amplitudes α and β. This property is often described by the rather mystical and intimidating statement that a qubit can exist simultaneously in both of its ↑ and ↓ states.
Yes, quantum mechanics often defies intuition. But this concept shouldn't be couched in such perplexing language. Instead, think of a vector positioned in the x-y plane and canted at 45 degrees to the x-axis. Somebody might say that this vector simultaneously points in both the x- and y-directions. That statement is true in some sense, but it's not really a useful description. Describing a qubit as being simultaneously in both ↑ and ↓ states is, in my view, similarly unhelpful. And yet, it's become almost de rigueur for journalists to describe it as such.
In a system with two qubits, there are 2^2 or 4 basic states, which can be written (↑↑), (↑↓), (↓↑), and (↓↓). Naturally enough, the two qubits can be described by a quantum-mechanical wave function that involves four complex numbers. In the general case of N qubits, the state of the system is described by 2^N complex numbers, which are restricted by the condition that their squared magnitudes must all add up to 1.
While a conventional computer with N bits at any given moment must be in one of its 2^N possible states, the state of a quantum computer with N qubits is described by the values of the 2^N quantum amplitudes, which are continuous parameters (ones that can take on any value, not just a 0 or a 1). This is the origin of the supposed power of the quantum computer, but it is also the reason for its great fragility and vulnerability.
How is information processed in such a machine? That's done by applying certain kinds of transformations—dubbed “quantum gates"—that change these parameters in a precise and controlled manner.
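Concretely, a single-qubit gate is just a 2x2 unitary matrix acting on the amplitudes α and β. The sketch below (an illustration of mine, not taken from the article) builds a rotation gate with a continuously adjustable angle, checks that it is unitary, and shows that applying it changes the continuous parameters while preserving normalization.

```python
# A single-qubit "quantum gate" is a 2x2 unitary matrix; its effect is a precise,
# continuous change of the amplitudes (alpha, beta).  Illustrative sketch.
import numpy as np

def rotation_gate(theta):
    """Rotation about the y-axis by angle theta (a continuously variable parameter)."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

theta = 0.7319                                   # any real value is allowed
U = rotation_gate(theta)
assert np.allclose(U.conj().T @ U, np.eye(2))    # unitarity: U†U = I

state = np.array([1.0, 0.0])                     # start in the ↑ state
alpha, beta = U @ state                          # gate application = matrix-vector product
print(alpha, beta, abs(alpha)**2 + abs(beta)**2) # normalization is preserved (= 1)
```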
Experts estimate that the number of qubits needed for a useful quantum computer, one that could compete with your laptop in solving certain kinds of interesting problems, is between 1,000 and 100,000. So the number of continuous parameters describing the state of such a useful quantum computer at any given moment must be at least 2^1,000, which is to say about 10^300. That's a very big number indeed. How big? It is much, much greater than the number of subatomic particles in the observable universe.
To repeat: A useful quantum computer needs to process a set of continuous parameters that is larger than the number of subatomic particles in the observable universe.
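The arithmetic is easy to verify. The snippet below checks the 2^N scaling with exact integer arithmetic; the figure of roughly 10^80 subatomic particles in the observable universe is a commonly cited estimate that I am adding for comparison, not a number from the article.

```python
# Verify the scaling claims: an N-qubit state is described by 2^N continuous amplitudes.
from math import log10

for n in (5, 50, 1_000):
    digits = int(n * log10(2)) + 1            # number of decimal digits of 2^n
    print(f"{n} qubits -> 2^{n} amplitudes (~{digits} decimal digits)")

print(2 ** 50)                                # 1125899906842624 (exact)
print(2 ** 1_000 > 10 ** 300)                 # True: 2^1000 ~ 1.07e301, "about 10^300"
print(2 ** 1_000 > 10 ** 80)                  # True -- 10^80 is a commonly cited estimate
                                              # of the subatomic-particle count in the
                                              # observable universe (my figure, not the article's)
```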
At this point in a description of a possible future technology, a hardheaded engineer loses interest. But let's continue. In any real-world computer, you have to consider the effects of errors. In a conventional computer, those arise when one or more transistors are switched off when they are supposed to be switched on, or vice versa. This unwanted occurrence can be dealt with using relatively simple error-correction methods, which make use of some level of redundancy built into the hardware.
In contrast, it's absolutely unimaginable how to keep errors under control for the 10^300 continuous parameters that must be processed by a useful quantum computer. Yet quantum-computing theorists have succeeded in convincing the general public that this is feasible. Indeed, they claim that something called the threshold theorem proves it can be done. They point out that once the error per qubit per quantum gate is below a certain value, indefinitely long quantum computation becomes possible, at a cost of substantially increasing the number of qubits needed. With those extra qubits, they argue, you can handle errors by forming logical qubits using multiple physical qubits.
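The redundancy idea behind the threshold theorem has a much simpler classical ancestor: the repetition code with majority voting. The sketch below illustrates only that classical analogue (real quantum codes are far more elaborate, since they must also protect continuous phase information without directly measuring the data), but it shows the threshold flavour: below a certain physical error rate, adding redundancy suppresses logical errors; above it, redundancy makes things worse.

```python
# Classical analogue of the threshold idea: a repetition code with majority voting.
# (Real quantum error correction is far more involved; this only shows how redundancy
# can trade many noisy physical copies for one more reliable logical value.)
from math import comb

def logical_error_rate(p, n):
    """Probability that a majority vote over n noisy copies is wrong (flip prob. p each)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range((n // 2) + 1, n + 1))

for p in (0.01, 0.1, 0.4, 0.6):               # physical error rate per copy
    rates = [logical_error_rate(p, n) for n in (1, 3, 7, 15)]
    print(p, [f"{r:.2e}" for r in rates])
# p = 0.01 or 0.1: the logical error rate drops rapidly as n grows (below "threshold")
# p = 0.6: adding redundancy makes things worse (above "threshold")
```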
How many physical qubits would be required for each logical qubit? No one really knows, but estimates typically range from about 1,000 to 100,000. So the upshot is that a useful quantum computer now needs a million or more qubits. And the number of continuous parameters defining the state of this hypothetical quantum-computing machine—which was already more than astronomical with 1,000 qubits—now becomes even more ludicrous.
Even without considering these impossibly large numbers, it's sobering that no one has yet figured out how to combine many physical qubits into a smaller number of logical qubits that can compute something useful. And it's not like this hasn't long been a key goal.
In the early 2000s, at the request of the Advanced Research and Development Activity (a funding agency of the U.S. intelligence community that is now part of Intelligence Advanced Research Projects Activity), a team of distinguished experts in quantum information established a road map for quantum computing. It had a goal for 2012 that “requires on the order of 50 physical qubits" and “exercises multiple logical qubits through the full range of operations required for fault-tolerant [quantum computation] in order to perform a simple instance of a relevant quantum algorithm…." It's now the end of 2018, and that ability has still not been demonstrated.
The huge amount of scholarly literature that's been generated about quantum-computing is notably light on experimental studies describing actual hardware. The relatively few experiments that have been reported were extremely difficult to conduct, though, and must command respect and admiration.
The goal of such proof-of-principle experiments is to show the possibility of carrying out basic quantum operations and to demonstrate some elements of the quantum algorithms that have been devised. The number of qubits used for them is below 10, usually from 3 to 5. Apparently, going from 5 qubits to 50 (the goal set by the ARDA Experts Panel for the year 2012) presents experimental difficulties that are hard to overcome. Most probably they are related to the simple fact that 2^5 = 32, while 2^50 = 1,125,899,906,842,624.
By contrast, the theory of quantum computing does not appear to meet any substantial difficulties in dealing with millions of qubits. In studies of error rates, for example, various noise models are being considered. It has been proved (under certain assumptions) that errors generated by “local" noise can be corrected by carefully designed and very ingenious methods, involving, among other tricks, massive parallelism, with many thousands of gates applied simultaneously to different pairs of qubits and many thousands of measurements done simultaneously, too.
A decade and a half ago, ARDA's Experts Panel noted that “it has been established, under certain assumptions, that if a threshold precision per gate operation could be achieved, quantum error correction would allow a quantum computer to compute indefinitely." Here, the key words are “under certain assumptions." That panel of distinguished experts did not, however, address the question of whether these assumptions could ever be satisfied.
I argue that they can't. In the physical world, continuous quantities (be they voltages or the parameters defining quantum-mechanical wave functions) can be neither measured nor manipulated exactly. That is, no continuously variable quantity can be made to have an exact value, including zero. To a mathematician, this might sound absurd, but this is the unquestionable reality of the world we live in, as any engineer knows.
Sure, discrete quantities, like the number of students in a classroom or the number of transistors in the “on" state, can be known exactly. Not so for quantities that vary continuously. And this fact accounts for the great difference between a conventional digital computer and the hypothetical quantum computer.
Indeed, all of the assumptions that theorists make about the preparation of qubits into a given state, the operation of the quantum gates, the reliability of the measurements, and so forth, cannot be fulfilled exactly. They can only be approached with some limited precision. So, the real question is: What precision is required? With what exactitude must, say, the square root of 2 (an irrational number that enters into many of the relevant quantum operations) be experimentally realized? Should it be approximated as 1.41 or as 1.41421356237? Or is even more precision needed? There are no clear answers to these crucial questions.
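One way to see why the question matters is a toy calculation (my own, not the author's): take an ideal rotation gate whose matrix entries involve 1/√2, replace its angle by a value "calibrated" to only two decimal places, and apply the gate repeatedly. The per-gate error is tiny, but the measurement probabilities drift further from the ideal ones as the circuit gets deeper, which is exactly why "how much precision is enough?" has no answer independent of how long the computation must run.

```python
# Toy illustration of the precision question: a slightly mis-set gate, applied many
# times, drifts away from the ideal evolution.  (Illustrative numbers only, not a
# model of any real device.)
import numpy as np

def ry(theta):                                        # rotation gate with angle theta
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

ideal_theta = np.pi / 2                               # entries become +-1/sqrt(2) = 0.7071...
approx_theta = round(ideal_theta, 2)                  # "calibrated" to two decimals: 1.57

state_ideal = np.array([1.0, 0.0])
state_real  = np.array([1.0, 0.0])
for gate_count in range(1, 3001):
    state_ideal = ry(ideal_theta)  @ state_ideal
    state_real  = ry(approx_theta) @ state_real
    if gate_count in (1, 10, 100, 1000, 3000):
        drift = abs(abs(state_ideal[0])**2 - abs(state_real[0])**2)
        print(gate_count, f"probability drift = {drift:.4f}")
# The angle error is under 1e-3 per gate, yet after a few thousand gates the
# measurement statistics no longer resemble the ideal ones.
```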
While various strategies for building quantum computers are now being explored, an approach that many people consider the most promising, initially undertaken by the Canadian company D-Wave Systems and now being pursued by IBM, Google, Microsoft, and others, is based on using quantum systems of interconnected Josephson junctions cooled to very low temperatures (down to about 10 millikelvins).
The ultimate goal is to create a universal quantum computer, one that can beat conventional computers in factoring large numbers using Shor's algorithm, performing database searches by a similarly famous quantum-computing algorithm that Lov Grover developed at Bell Laboratories in 1996, and other specialized applications that are suitable for quantum computers.
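For a sense of what such an algorithm looks like when it is "run" today, here is Grover's search executed on a classically stored state vector for three qubits, i.e. eight amplitudes (a standard textbook construction, written out by me rather than taken from the article). Note that the emulation must carry all 2^N amplitudes explicitly, which is precisely what prevents this approach from scaling to machines of useful size.

```python
# Grover's search for one marked item among 8, executed on a classical state vector.
import numpy as np

n_qubits = 3
N = 2 ** n_qubits
marked = 5                                   # index of the item the oracle recognizes

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition over all 8 basis states

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))   # optimal iteration count (= 2 here)
for _ in range(iterations):
    state[marked] *= -1                      # oracle: flip the sign of the marked amplitude
    state = 2 * np.mean(state) - state       # diffusion: inversion about the mean

probs = state ** 2
print(probs.round(3))                        # probability concentrated on index 5 (~0.945)
print(probs[marked])
```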
On the hardware front, advanced research is under way, with a 49-qubit chip (Intel), a 50-qubit chip (IBM), and a 72-qubit chip (Google) having recently been fabricated and studied. The eventual outcome of this activity is not entirely clear, especially because these companies have not revealed the details of their work.
While I believe that such experimental research is beneficial and may lead to a better understanding of complicated quantum systems, I'm skeptical that these efforts will ever result in a practical quantum computer. Such a computer would have to be able to manipulate—on a microscopic level and with enormous precision—a physical system characterized by an unimaginably huge set of parameters, each of which can take on a continuous range of values. Could we ever learn to control the more than 10^300 continuously variable parameters defining the quantum state of such a system?
My answer is simple. No, never.
I believe that, appearances to the contrary, the quantum computing fervor is nearing its end. That's because a few decades is the maximum lifetime of any big bubble in technology or science. After a certain period, too many unfulfilled promises have been made, and anyone who has been following the topic starts to get annoyed by further announcements of impending breakthroughs. What's more, by that time all the tenured faculty positions in the field are already occupied. The proponents have grown older and less zealous, while the younger generation seeks something completely new and more likely to succeed.
All these problems, as well as a few others I've not mentioned here, raise serious doubts about the future of quantum computing. There is a tremendous gap between the rudimentary but very hard experiments that have been carried out with a few qubits and the extremely developed quantum-computing theory, which relies on manipulating thousands to millions of qubits to calculate anything useful. That gap is not likely to be closed anytime soon.
To my mind, quantum-computing researchers should still heed an admonition that IBM physicist Rolf Landauer made decades ago when the field heated up for the first time. He urged proponents of quantum computing to include in their publications a disclaimer along these lines: “This scheme, like all other schemes for quantum computation, relies on speculative technology, does not in its current form take into account all possible sources of noise, unreliability and manufacturing error, and probably will not work."
Editor's note: A sentence in this article originally stated that concerns over required precision “were never even discussed." This sentence was changed on 30 November 2018 after some readers pointed out to the author instances in the literature that had considered these issues. The amended sentence now reads: “There are no clear answers to these crucial questions."
About the Author
Mikhail Dyakonov does research in theoretical physics at Charles Coulomb Laboratory at the University of Montpellier, in France. His name is attached to various physical phenomena, perhaps most famously Dyakonov surface waves.
posted on 2023-12-22 15:14 Angry_Panda