D-Wave Demo 2
The second big D-Wave demo was just a few days ago (November 13th, 2007). This time, they demonstrated image recognition in conjunction with one of the world’s leading experts in that field, whose image recognition company was recently acquired by Google.
The chip demonstrated this time was a 28-qubit chip, but as on “Whose Line Is It Anyway?”, the points don’t necessarily matter (yet). I don’t know the relevant numbers for this chip for certain, but for the 16-qubit chip from February (as Geordie presented on his blog back in March) they were 6 and 42: 6 is the largest number of variables such that any quadratic unconstrained binary optimisation (QUBO) problem of that size can be solved on the chip, and 42 is the number of tunable superconducting devices on it. In a nutshell: 6 bits of “useful” output and 42x bits of input, where x is the number of bits of precision per device (and the usefulness of those input bits is not as easy to determine). One might ask why only 6 useful bits of output from a 16-qubit chip; the discrepancy comes from limits of the physics impacting the chip design in ways that I don’t fully understand. As a completely different example of this sort of discrepancy, the famous “7-qubit” NMR factoring experiment (i.e. constructing a molecule and hitting it with radio-frequency pulses) had only 2 useful bits of output (the factor 3 in binary is 11) and 4 useful bits of input (the input number 15 in binary is 1111).
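For readers who haven’t seen the term before, a QUBO problem asks for the binary vector x minimising the quadratic form x^T Q x for a given matrix Q. A 6-variable instance is trivially brute-forceable, which is exactly why the early chips are proofs of concept rather than useful solvers. This toy sketch is my own illustration of the problem shape, not anything resembling D-Wave’s code:

```python
from itertools import product

def solve_qubo_brute_force(Q):
    """Minimise sum_ij Q[i][j] * x[i] * x[j] over binary vectors x.

    A 6-variable instance has only 2**6 = 64 candidates, so exhaustive
    search is instant -- the point of special-purpose hardware is
    reaching sizes where this search blows up exponentially.
    """
    n = len(Q)
    best_x, best_val = None, float("inf")
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# A small made-up instance: negative diagonal rewards setting a bit,
# positive off-diagonal entries penalise setting neighbouring bits together.
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
print(solve_qubo_brute_force(Q))  # → ((1, 0, 1), -2)
```

The minimiser (1, 0, 1) sets the first and third bits but not the middle one, since the off-diagonal penalties make adjacent set bits expensive.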
That sort of hair-splitting aside, I’m quite amazed at some of the poorly written news articles this time around, with gems of illogic like this excerpt from The Guardian:
“Have commercial quantum computers finally arrived? … A Google scientist seems to hope so but, unfortunately, the answer is probably no. Dr Hartmut Neven, Google’s expert on image searching, was involved in a demonstration of quantum computing on Monday – even though most scientists are extremely doubtful that any real quantum computing took place.”
So… is The Guardian implying that this guy who’s “Google’s expert on image searching” got hit in the head repeatedly until he believed whatever D-Wave told him? That makes NO SENSE! He’s one of the world’s top experts on image recognition; give him a bit of credit! Regardless of what this pool of “most scientists” (who seem to have spoken to no one but The Guardian) thinks of D-Wave, a world-class expert on a practical problem seems to think that D-Wave can help with the very application at which he’s an expert! I think any startup company that can get a commendation like that is probably bound for big success. Must… stop… ranting… *sigh*
Anyway, there’s been a lot of confusion among non-experts about which problems D-Wave is and isn’t still facing. By “problems”, I mean things for which a solution is still unclear. Some of the things that aren’t problems by this definition may still take a lot of work, but they have a known potential solution. I’d just like to clear up some of the confusion people have when asking me about D-Wave.
- Cooling a chip to 4 millikelvin (0.004 kelvin above absolute zero) is not a big problem for D-Wave, since refrigeration technology has come a long way in the past few decades and D-Wave has several experts on this subject.
- Chip fabrication is not a big problem for D-Wave (as far as I know). They could probably have a chip with 100,000 qubits made without much trouble; it just wouldn’t work without having tried chips of smaller sizes first.
- Shrinking the whole shebang down to something manageable (e.g. smaller than a room) is probably not a big problem for D-Wave. I don’t know the details, but I’m fairly certain they’ve got it covered.
- Power consumption by the cooling system is not a problem. The cooling system may take a while to cool the chip, but once it’s there, as Geordie has said on his blog several times, very little power is needed.
- Controlling a chip from conventional computers isn’t a problem for D-Wave in one sense, and may or may not be in another (I don’t know the details). The actual communication is not a major problem at all. This sort of thing is, however, why there is a projected huge jump in the number of qubits from a small number like 16 or 28 to a big number like 512 or 1024. It’s not like they are just stating impressive figures for no reason.
- Making use of D-Wave’s system as a client-side programmer shouldn’t be too hard at all. The software team at D-Wave has a pretty good system in place to provide a few simple APIs for programmers (and possibly non-programmers) to make use of the system. You don’t need to understand transistors to program for a conventional computer, so you don’t need to understand qubits to program for a quantum computer.
- Scott Aaronson is not a problem for D-Wave. He is not a physicist of any sort, let alone a quantum physicist, and he doesn’t appear to be a particularly good computer scientist. Computer Science is an APPLIED science, and he seemingly wouldn’t know how to apply any computer science if his life depended on it. If he could produce ANYthing to actually make use of the pointless nonsense he spouts, then he’d be a computer scientist; until then he’s just a surly mathematician and a wannabe computer scientist. (I do think the quantum mechanical analysis of roast beef would’ve been awesome, though.)
- Chip design is probably not a problem for D-Wave, in that every chip-design-related issue that came up while I was at D-Wave got solved before I even knew that it was an issue related to chip design. Saying that they’ve got some good experts on this is an understatement.
- Funding may or may not be a problem for D-Wave. As Geordie (or Herb?) said in some quotation in some article I read, if they don’t get to around 512 qubits by the end of 2008, they could be in trouble.
- As for the actual quantum physics, I haven’t the slightest clue whether it’ll be a problem or not, because I don’t understand more than the “dumbed-down” versions of the physics behind the chips. I do know (from some experimental results that will be presented at several universities shortly) that the chips aren’t acting based on classical physics, as Geordie has said on his blog a few times. That doesn’t necessarily mean that there will be speedup from the quantum physics, but it can’t hurt.
D-Wave, in one sense, is actually fairly safe. Supposing the quantum physics doesn’t scale up at all, they can still (much more easily, in fact) get a certain speedup simply from using superconducting components instead of semiconductor components, not to mention an even bigger improvement in computation per unit of electrical power. Supposing even that fails, they should still have some of the best software in the world for solving these tough problems. If that fails too, they’re out of luck, but unless that happens, I’ll be rooting for them, regardless of where I’m working. I may end up back there someday, or if I do a startup with Code Cortex, I may even end up indirectly helping them out.
Anyway, hats off to D-Wave for their second big demo, and I can’t wait to see any video/images of it online. Best of luck on the road ahead.