D-Wave Demo 2

The second big D-Wave demo was just a few days ago (November 13th, 2007). This time, they demonstrated some image recognition in conjunction with one of the world’s leading experts on the subject, whose image recognition company was recently bought by Google. 🙂

The chip demonstrated this time was a 28-qubit chip, but like on “Whose Line Is It Anyway?”, the points don’t necessarily matter (yet). I don’t know the relevant numbers for this chip for certain, but the relevant numbers for the 16-qubit chip in February (as Geordie presented on his blog back in March) were 6, the largest number of variables such that any quadratic unconstrained binary optimisation (QUBO) problem of that size could be solved, and 42, the number of tunable superconducting devices on the chip. In a nutshell: 6 bits of “useful” output and 42·x bits of input, where x is the number of bits of precision per device (the usefulness of those input bits is not as easy to determine). One might ask “Why only 6 useful bits of output when it’s a 16-qubit chip?”, but this discrepancy is due to limits of the physics impacting the chip design in ways that I don’t fully understand. As a completely different example of this sort of discrepancy, the “7-qubit” joke made by some research group with NMR (i.e. by constructing a molecule and shooting lasers at it) has only 2 useful bits of output (the factor 3 in binary is 11) and 4 useful bits of input (the input number 15 in binary is 1111).
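For anyone who hasn’t met QUBO before, here’s roughly what “solving a 6-variable quadratic unconstrained binary optimisation problem” means, as a minimal Python sketch (the matrix below is an arbitrary example I made up, not an actual D-Wave instance); the 6-bit minimising assignment is exactly the 6 bits of “useful” output:

```python
# Minimal sketch: brute-forcing a tiny QUBO instance.
# The matrix Q is made up for illustration; a real instance would
# encode a problem like Maximum Independent Set.
from itertools import product

Q = [
    [-1,  2,  0,  0,  2,  0],
    [ 0, -1,  2,  0,  0,  0],
    [ 0,  0, -1,  2,  0,  0],
    [ 0,  0,  0, -1,  2,  2],
    [ 0,  0,  0,  0, -1,  0],
    [ 0,  0,  0,  0,  0, -1],
]

def qubo_energy(x, Q):
    """Energy x^T Q x of the binary assignment x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# 6 binary variables means only 2^6 = 64 assignments, so brute force is
# trivial; the whole point of special-purpose hardware is scaling past this.
best = min(product([0, 1], repeat=6), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```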

That sort of hair-splitting aside, I’m quite amazed at some of the poorly-written news articles this time around, with gems of illogic like this excerpt from The Guardian:

“Have commercial quantum computers finally arrived? … A Google scientist seems to hope so but, unfortunately, the answer is probably no. Dr Hartmut Neven, Google’s expert on image searching, was involved in a demonstration of quantum computing on Monday – even though most scientists are extremely doubtful that any real quantum computing took place.”

So…… is The Guardian implying that this guy who’s “Google’s expert on image searching” just got hit in the head repeatedly to the point where he believes whatever D-Wave told him? That makes NO SENSE! He’s one of the world’s top experts on image recognition. Give him a bit of credit! Regardless of what this pool of “most scientists” (who seem not to have said anything except to The Guardian) thinks of D-Wave, a world-class expert on something practical seems to think that D-Wave can help out with the practical application at which he’s an expert! I think any startup company that can get a commendation like that is probably bound for big success. Must…. stop…. ranting…. *sigh*

Anyway, there’s been a lot of confusion among non-experts about which problems D-Wave is and isn’t still facing. By “problems”, I mean things for which a solution is still unclear. Some things that aren’t problems, as I’m defining the word here, may still take a lot of work, but a potential solution for them is known. I’d just like to clear up some of the confusion people have when asking me about D-Wave.

  • Cooling a chip to 4 millikelvin (0.004 kelvin above absolute zero) is not a big problem for D-Wave, since refrigeration technology has come a long way in the past few decades and D-Wave has several experts on the subject.
  • Chip fabrication is not a big problem for D-Wave (as far as I know). They could probably have a chip with 100,000 qubits made without much trouble; it just wouldn’t work without having tried chips of smaller sizes first.
  • Reducing the whole shebang to a manageable size (e.g. less than room-sized) is probably not a big problem for D-Wave. I don’t know the details of this, but I’m fairly certain that they’ve got it covered.
  • Power consumption by the cooling system is not a problem. The cooling system may take a while to cool the chip, but once it’s there, as Geordie has said on his blog several times, very little power is needed.
  • Controlling a chip from conventional computers isn’t a problem for D-Wave in one sense, and may or may not be in another (I don’t know the details). The actual communication is not a major problem at all. This sort of thing is, however, why there is a projected huge jump in the number of qubits from a small number like 16 or 28 to a big number like 512 or 1024. It’s not like they are just stating impressive figures for no reason.
  • Making use of D-Wave’s system as a client-side programmer shouldn’t be too hard at all. The software team at D-Wave has a pretty good system in place to provide a few simple APIs for programmers (and possibly non-programmers) to make use of the system (see the sketch after this list). You don’t need to understand transistors to program a conventional computer, and likewise you don’t need to understand qubits to program a quantum computer.
  • Scott Aaronson is not a problem for D-Wave. He is not a physicist of any sort, let alone a quantum physicist, and he doesn’t appear to be a particularly good computer scientist. Computer Science is an APPLIED science, and he seemingly wouldn’t know how to apply any computer science if his life depended on it. If he could produce ANYthing to actually make use of the pointless nonsense he spouts, then he’d be a computer scientist; until then he’s just a surly mathematician and a wannabe computer scientist. (I do think the quantum mechanical analysis of roast beef would’ve been awesome, though.)
  • Chip design is probably not a problem for D-Wave, in that every chip-design-related issue that came up while I was at D-Wave got solved before I even knew that it was an issue related to chip design. Saying that they’ve got some good experts on this is an understatement.
  • Funding may or may not be a problem for D-Wave. As Geordie (or Herb?) said in some quotation in some article I read, if they don’t get to around 512 qubits by the end of 2008, they could be in trouble.
  • As for the actual quantum physics, I haven’t the slightest clue whether it’ll be a problem or not, because I don’t understand more than the “dumbed-down” versions of the quantum physics behind the chips. I do know (from some experimental results that will be presented at several universities shortly) that the chips aren’t acting based on classical physics, as Geordie has said on his blog a few times. That doesn’t necessarily mean that there will be a speedup from the quantum physics, but it can’t hurt.
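On the client-side programming point above: I can’t show D-Wave’s actual API, but the flavour of interface I mean looks roughly like this sketch (all names here are invented for illustration; this is not D-Wave’s real API):

```python
from itertools import product

class SolverClient:
    """Hypothetical client interface; the names are invented for illustration."""

    def __init__(self, host):
        self.host = host  # where the solver service would live

    def solve_qubo(self, Q):
        """Return a minimising binary assignment for the QUBO matrix Q.

        A real client would serialise Q, ship it to the hardware, and wait
        for a sample; as a stand-in, this stub brute-forces the answer
        locally so the example actually runs.
        """
        n = len(Q)
        energy = lambda x: sum(Q[i][j] * x[i] * x[j]
                               for i in range(n) for j in range(n))
        return min(product([0, 1], repeat=n), key=energy)

# The point: the programmer states the problem, not the physics.
client = SolverClient("solver.example.com")
print(client.solve_qubo([[-1, 2], [0, -1]]))
```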

D-Wave, in one sense, is actually fairly safe. Supposing the quantum physics doesn’t scale up at all, they can still (much more easily, in fact) get a certain speedup simply from using superconducting components instead of semiconductor components, not to mention an even bigger improvement in computation per unit of electrical power. Supposing even that fails, they should still have some of the best software in the world for solving these tough problems. If that fails, they’re out of luck, but unless that happens, I’ll be rooting for them, regardless of where I’m working. I may end up back there someday, or if I do a startup with Code Cortex, I may even end up indirectly helping them out.

Anyway, hats off to D-Wave for their second big demo, and I can’t wait to see any video/images of it online. Best of luck on the road ahead.


~ by Neil Dickson on November 20, 2007.

13 Responses to “D-Wave Demo 2”

  1. Hey Neil, I was at Mohammad’s talk at MIT yesterday. It was rather interesting. I wouldn’t be too quick in discounting Scott Aaronson; he just happens to be the most vocal of a large group of academics who are skeptical, even after the talk. Anyway thanks for the insider info; I was wondering why there’s been so little press about the demo. Hope you’re doing well!

  2. Hi!
    I wouldn’t exactly call anything I posted “insider info”. I pretty much just took what’s already been posted by Geordie across many months and summarized it. Plus, I don’t know a lot of the details, since I’m back at school this term.

    As far as the “Aaronson et al.” issue is concerned, I see it like the String Theory people dumping on the guy who recently came up with a simple new “theory of everything” based on E8. Maybe the guy’s completely wrong. In fact, from what I’ve heard, it’s pretty likely that he’s wrong. At least he’s got a CHANCE of being right, though, since his conjectures are at least testable, whereas String Theory isn’t. Science is based on experimentation, but the “pure academics” don’t seem to think so, so they dump on D-Wave for taking an experimentation-based approach. It takes time to get data good enough to be published (whatever that means these days), but it works.

    Anyway, it’ll be really cool when they post some more stuff from the demo and Mohammad’s talk. 🙂

  3. Er yes, I meant to take out the “insider info” phrase but was in a hurry when I posted. And by “discount” I mean “discredit”. Posting comments just before rushing to work was not a good idea …

    Anyway, now that I’ve been on the academic side of things too, I think there are valuable objections from both sides. I’m just going to take a spectator’s stance from now on and see how it goes.

  4. “NMR (i.e. by constructing a molecule and shooting lasers at it)”

    An NMR spectrometer shooting lasers at a molecule? Please read arXiv:quant-ph/0112176.

  5. You seem to misunderstand how most quantum computer technologies work. You don’t have to have specific input bits and output bits. Qubits can do double duty as input and output. The only restriction that we have is that the computation (prior to measurement) be unitary.

    As nextquant mentions, NMR doesn’t use lasers, but relies instead on radio waves. Perhaps you are thinking of ODMR or maybe electron spin techniques?

    Also, I should mention that I find your characterization of Scott to be extremely inaccurate. He works on quantum computing, and the fact that he is a theoretical computer scientist as opposed to a physicist surely is a bonus when talking about quantum complexity. Also, it is worth noting that while he may be the most vocal source of scepticism about D-Wave, he is by no means alone in that view within the QIP community.

  6. @nextquant: “lasers” is a term for the layman. That article states that they used RF pulses, which is close enough to “lasers” for most people, even though it’s horribly inaccurate to someone who actually understands NMR spectroscopy. I don’t know more than what Wikipedia says on NMR spectroscopy, so I don’t know much about it myself. I’m fairly certain, though, that it would take a pointlessly long amount of time to explain the exact functioning of RF to someone like my mother, when all I’m really trying to imply is that they used a very complicated method to do something humorously simple. Similarly, I could complain about other people referring to problems as NP-complete when often they are really (FP^NP)-complete, which, if P != NP, are quite different classes; but unless you actually deal with the difference between the two on a regular basis, it doesn’t really matter.

    @Joe: You seem to misunderstand how REAL computer technologies work. Sorry to throw your own insult back at you, but you really seem not to understand what I was talking about. I didn’t say anything about “input bits” or “output bits”. I said “bits of input” and “bits of output”, which are very different from the former. The amount and meaning of the input determines the range of problems that could be solved by a blackbox system, and the amount and meaning of output determines the range of solutions that could be produced.

    On the chip D-Wave presented in February, they only counted the 16 output qubits as qubits; however, each of them had at least 2 bits of analog input applied to it, and the couplers were also superconducting loops (i.e. they could also be considered qubits in this case, but with just z instead of x and z), each having some amount of input applied. However, one could consider the relevant number of bits of output to be 6 (since each of the 6 nodes in an input graph could be in or out of the Maximum Independent Set), and the relevant number of bits of input to be 15 (since there are 15 possible edges in a 6-node undirected graph). That’s basically what I was saying in my post.
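    To make that counting concrete, here’s a quick Python sketch (the particular edge bits below are just an example I made up): the 15 “input bits” choose which of the C(6,2) = 15 possible edges are present, and the answer is a 6-bit membership vector, i.e. the 6 “output bits”:

```python
# Sketch: 15 input bits (edges of a 6-node graph) in,
# 6 output bits (membership in a maximum independent set) out.
from itertools import combinations, product

n = 6
possible_edges = list(combinations(range(n), 2))
assert len(possible_edges) == 15  # C(6,2) = 15 input bits

# Example instance: one bit per possible edge (made-up values).
edge_bits = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
edges = [e for e, b in zip(possible_edges, edge_bits) if b]

def is_independent(members, edges):
    """True if no present edge has both endpoints selected."""
    return not any(u in members and v in members for u, v in edges)

# The 6 output bits: a membership vector for a maximum independent set.
best = max((s for s in product([0, 1], repeat=n)
            if is_independent({i for i in range(n) if s[i]}, edges)),
           key=sum)
print(best)
```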

    My characterisation of Scott Aaronson may be overly harsh and rude, but not so inaccurate. The most significant problems in progress of quantum computing right now have nothing to do with computer science, they have to do with quantum physics. As such, a theoretical computer scientist with an ego as large as Scott’s is nothing but detrimental to quantum computing efforts. I realise that many QIP researchers are sceptical of D-Wave, but myself, I’m much more sceptical of QIP researchers, since they have almost no motivation to make progress, only the motivation to claim that they will make progress, in order to get funding. D-Wave must make real progress quickly or it will go bankrupt, whereas Scott Aaronson never needs to make any progress to stay afloat. To put it bluntly, if D-Wave is lying, they’ll never make any money.

  7. Neil,

    I realise that this is your blog, and I don’t want you to think I’m goading you. There are however a number of points that I feel I should raise.

    I’m well aware of how current computer technology works, and how various quantum architectures work.

    You are not using a consistent metric in comparing the number of bits of input. You consider only the initial qubit state for the demonstration of Shor’s algorithm, whereas for the D-Wave machine you are counting all controls. For the factoring algorithm, you have completely neglected to count the control fields. The computer is universal, not just a factoring machine: by varying the instructions it is possible to perform an enormous number of 7-qubit unitaries (assuming finite accuracy). For Orion, you are counting all the control parameters. Using the same metric for each, the seven-qubit machine wins, as it is universal.

    The reasoning is as follows: there are 2^(2N)-1 independent terms in a unitary operation on N qubits, each having p bits of precision, so you essentially need (4^N)p bits to describe an arbitrary unitary. For 7 qubits this would be 16384*p classical bits required to describe an arbitrary unitary. Of course the lack of fault-tolerance is an issue, but then again it applies to Orion also.
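    For concreteness, that figure is just the standard dimension count for the unitary group (the -1 is the correction mentioned in the next comment):

```latex
% Dimension counting for unitaries on N qubits, with d = 2^N:
\dim U(2^N) = \left(2^N\right)^2 = 4^N, \qquad \dim SU(2^N) = 4^N - 1.
% With p bits of precision per real parameter, N = 7 gives
\left(4^7 - 1\right) p = 16383\,p \approx 16384\,p \ \text{classical bits.}
```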

    You are severely off the mark as regards the relevance of theory to quantum computing research. Algorithms are hugely important, as are quantum communications protocols, the simulability of various quantum systems on classical computers, fault-tolerance, MBQC, etc. There is a reason why there are so many theorists in the area. With that in mind, you may want to check the number of citations for Scott’s papers before you write off his work. The very first one listed in Google Scholar has been cited 83 times. That’s a lot by anyone’s standards.

    As regards quantum computing researchers not wanting quantum computers, you are severely misinformed. Virtually everyone wants to see practical devices. Computers have existed for 60+ years, and there is still a lot to be learnt from theoretical computer science. And all of us physicists would probably turn our attention to quantum gravity, condensed matter, other AMO topics or other open areas if there was nothing left to be done.
    You could make the same argument about any branch of science, and you would be equally wrong. There is a reason why our knowledge of nature increases at such a dramatic rate.

    And as regards D-Wave, I have no idea what plan B is should they fail, but as they are becoming something of a patent factory, they could decide to pull a SCO. I don’t know one way or the other, and so I don’t accept your argument that if they fail they are doomed.

  8. Oops, I left out the -1 after 4^N in some of that, but it makes virtually no difference.

  9. You seem to once again have missed my points. I think theory is VERY important to quantum computers. I think that computer science theory is currently almost irrelevant to quantum computers because the physics theory has been neglected so badly in this regard that nobody except D-Wave seems to know how to move forward with real quantum computing efforts. It is a huge mistake to assume that the physics of a quantum computer is a system where you can make any absurd constructs you want, as the computer science theorists have been assuming. I’ve seen papers claiming to obtain fault-tolerance by constructing 6-local physical Hamiltonians with arbitrary orders of x’s and z’s. From what I’ve seen, it’s hard enough to make 2-local Hamiltonians work well when you’re limited to just x, z, and zz, and even then it might not scale up. The physics is the current bottleneck, and not many seem to be addressing it; there’s not even a decent model for what “realistic” noise in the system is.
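    To be concrete about what “limited to just x, z, and zz” means: the 2-local form I’m describing is the transverse-field-Ising-style Hamiltonian that superconducting flux qubits naturally give you,

```latex
H = \sum_i \Delta_i \, \sigma^x_i
  + \sum_i h_i \, \sigma^z_i
  + \sum_{\langle i,j \rangle} J_{ij} \, \sigma^z_i \sigma^z_j ,
```

    with tunable coefficients Δ_i, h_i, and J_ij, and nothing fancier.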

    As for the “universality” of the custom molecule presented in the paper, they state that the qubits only “interact pairwise”; as such, you can only have up to (N(N-1)/2)p different operations, and (7(7-1)/2)p = 21p. Plus, they state that they don’t even use every possible pairwise interaction. “Universal” on N qubits doesn’t mean that you can represent every N-qubit unitary; it means that you can map any unitary of some number of interactions onto a number of qubits that is polynomial in the number of interactions, with at most a polynomial slowdown. Besides that, if all subsets of qubits needed to have separate interactions, the input would need to be exponentially large for even NP-complete problems, which would then defeat the purpose of any potential speedup. You seem to be confusing this with the amount of space required to simulate the (ideal) quantum system, which indeed is exactly as large as you describe (and as stated in the paper).

    Even ignoring that, I’m quite consistent in my “number of relevant bits of input/output”. How many meaningful problems can be expressed solely in terms of that molecule? The only one I know of has only 4 relevant bits of input, namely the number 15, and they state that the algorithm fails for even numbers and for powers of primes, so it’s generous that I’m even counting it as 4 bits of input. Please inform me of anything else you can think of that they could’ve done with it. D-Wave’s February chip could represent 32768 (2^15) different maximum independent set problems, with each graph edge being a relevant bit of input. (I don’t know how many of those it can solve, but the same goes for any other problems you might think of for the molecule.)

    As far as researchers wanting quantum computers so badly, it looks like they haven’t exactly made much progress in the 26 years since Feynman’s famous talk. Plus, if researchers really cared about making something useful, they wouldn’t have wasted so much time on something useless like factoring. I know this sounds blunt (and yet way too metaphorical), but at some point a kid taking piano lessons needs to decide whether they will become a concert pianist or try something different to contribute to the world. A great speaker (I don’t remember his name, unfortunately) once said to me “If you can lead, you have the moral obligation to lead!” and quite frankly, he’s right in a more general sense than that. If you can be contributing to the world (like helping to create a useful quantum computer), please don’t spend all your time dancing around the issue; do something about it! Judging from the Timeline of Quantum Computing on Wikipedia, the field is so scattered that you’d think the researchers aren’t even talking with each other.

  10. whoops, by “(N(N-1)/2)p different operations” I meant “(N(N-1)/2)p bits with which to represent different operations”, but the rest still holds, except possibly my overly colloquial definition of “universal” 🙂

  11. I’m curious as to what you think of the biggest criticism that the academics have about D-wave, which is: “does the 28-qubit Dwave quantum computer actually quantum compute the solution to MCS, or is it just classical annealing?” (stolen from a commenter on Aaronson’s blog.)

    To my understanding, D-Wave hasn’t answered this question yet, even at the talk. I suppose the MRT experiment was an attempt to address this, but now the next question is that the MRT was done with just 1-2 qubits, and they aren’t convinced that it easily extrapolates to 28 qubits.

    Anyway, I must say your strongly pro-D-Wave perspective is interesting and worth publishing. There aren’t many of you on the blogosphere 😛 (my own affiliation notwithstanding.)

  12. I suppose it depends on what is meant by “classical annealing”. If you mean “performs with the same scaling as classical annealing”, it’s my understanding that if they’re lucky it might happen to achieve the SAME scaling as classical annealing. The slides were mostly on Global Adiabatic Evolution, not Local Adiabatic Evolution or Quantum Annealing, and Global Adiabatic Evolution has no speedup (at least with the method that they presented) over classical annealing except possibly a constant factor if it’s not all lost in overhead. Local Adiabatic Evolution is very dependent on decoherence times and Quantum Annealing still seems not-very-well-examined to me. There’s a book out on Quantum Annealing, but the methods and results presented in the book seem rather sketchy.

    If you mean “operates completely by classical physics while performing annealing”, as far as I know, that’s not true on either count. The results shown don’t seem to correlate with the classical physics model (though I’m not an expert on it), and quantum annealing isn’t really discussed until the “Additional Slides” part of the deck. The word “annealing” appears on a slide or two just before the conclusion slide, but they’re definitely not trying to show results on quantum annealing in the slides. The mention of classical annealing makes it look a bit less clear, but for the most part they’re really not talking about doing annealing.

    As for the extrapolation to 28 qubits, I have no clue, but I do know that accurate simulation of 28 qubits is probably completely infeasible. It took plenty of computation just to accurately simulate a couple of qubits.
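    Here’s a rough back-of-envelope (my numbers, assuming double-precision complex amplitudes) for why I say that:

```python
# Back-of-envelope: memory needed just to STORE the state of a
# 28-qubit simulation, assuming double-precision complex amplitudes.
n = 28
bytes_per_amplitude = 16  # complex128: two 8-byte doubles

# Ideal closed-system state vector: 2^n amplitudes.
state_vector_bytes = 2**n * bytes_per_amplitude
print(state_vector_bytes / 2**30, "GiB")    # 4.0 GiB: already hefty in 2007

# A density matrix (needed for any honest noise model) squares the
# dimension: 2^n by 2^n complex entries.
density_matrix_bytes = (2**n) ** 2 * bytes_per_amplitude
print(density_matrix_bytes / 2**60, "EiB")  # 1.0 EiB: completely hopeless
```

    And that’s just storage; actually evolving the noisy system in time costs far more.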

    On the “pro-D-Wave” comment, I think that there’s a lot to be said for D-Wave, and a lot to be said against it, but by far they’ve got the best chances of anyone at making a quantum computer in the next 5 years. Independent researchers certainly aren’t going to do it. There’s a whole lot more to making a useful quantum computer than working out some equations on paper, and that’s been overlooked by many. Heck, some researchers even criticize D-Wave for trying to actually do something without working out all the answers ahead of time. If working out all the answers on paper were enough, it’d have been done years ago.

    I don’t write software by designing every function on paper and proving that it’s bug-free before I write a line of code (though humorously enough I’ve seen a few people try and fail at doing this); I plan it in general and then start writing and testing as I go. I actually dislike that so few Comp-Sci professors are involved with any sort of software development. So many theoretical conclusions don’t make sense at all when put into practice (like Strassen’s Algorithm in the case of dense matrices). Also, I don’t write papers until I’ve got some big results to show for my work (unless I really have to, as in the case of the PwnOS design document to be released in a few weeks). If I were D-Wave, I’d probably wait until 1024 qubits before writing major papers, but criticism from people writing papers on 1-qubit and 2-qubit systems is forcing them to divert course and spend lots of time justifying their work. Maybe they’ve overhyped things more than I would have, and maybe they haven’t, but I’m not in business (yet).

    One of the most common questions I get from my fellow Comp-Sci students is “How on earth do you program a quantum computer?”, and that’s an issue that D-Wave has really addressed better than anyone else. No programmer is going to build classical Turing machines to get something done, let alone quantum Turing machines. They’re not going to build classical circuits either, so why on earth should they be forced to build quantum circuits? The issue is that people hear “quantum computer” and expect that, for instance, their whole laptop has to be quantum for it to count, and that they have to interface with it in a quantum way, even though that makes no sense for most tasks.

    In a nutshell, I’m just sick of people who’ve never tried to help build a useful quantum computer criticizing D-Wave for not doing it the way they would have. Maybe D-Wave isn’t doing it in a good way, and maybe they are, but at least they’re doing something.

  13. whoops, another slight omission: I meant “but by far they’ve got the best chances of anyone at making a *useful* quantum computer in the next 5 years.” I don’t like always having to prepend the word “useful” to “quantum computer”, but I also don’t like how many people seem to have missed that point elsewhere. If you’ve made something that isn’t well on the road to being useful, it probably isn’t worth much serious discussion (like a molecule that factors 15 in about the same amount of time it takes an 8-year-old to factor 15; 720 ms, for those who don’t have time to get through the paper).
