## Possible Downtime and More Random Math

•August 22, 2009 • 1 Comment

http://www.neildickson.com and http://www.codecortex.com may or may not be down on Sunday or Monday.  If they are, I blame Scotiabank for:

• needing 6-8 business days to make a simple transaction that can be made in less than half a second because of two little things I like to call: the internet and the 21st century
• sending two versions of a credit card within a week or two where activating the wrong one makes them both not work
• needing over a week to make and send a new credit card instead of just creating one on the spot in about 30 seconds as they easily could (just need a punch for the numbers and a magnetic strip writer)
• charging me an outrageous $54 annual fee for the privilege of carrying a piece of plastic around that didn’t work anyway
• charging a record-high 30% interest rate on the credit card even though I’ve never missed a payment
• paying a record-low 0.25% interest rate on my savings account, and even then only on amounts over $5,000; less than 1/3 of what RBC pays, and less than 1/4 of what HSBC pays
• working only 9:30am-4:00pm to avoid dealing with customers who have jobs

Anyway, hopefully the sites don’t go down, and here’s some more random math-ish stuff.  Suppose you have a function expressed as its Taylor series about zero (its Maclaurin series):

$f(x) = \displaystyle\sum_{i=0}^{\infty} a_i x^i$

How can you find the square of the function?  In hindsight, it should have been obvious, but it’s:

$(f(x))^2 = \displaystyle\sum_{i=0}^{\infty} \left(\displaystyle\sum_{j=0}^{i} a_j a_{i-j}\right) x^i$

In fact, more generally, if we also have $g(x) = \sum_{i=0}^{\infty} b_i x^i$:

$f(x)g(x) = \displaystyle\sum_{i=0}^{\infty} \left(\displaystyle\sum_{j=0}^{i} a_j b_{i-j}\right) x^i$
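Both formulas are just the Cauchy product of the coefficient sequences, which is easy to compute for truncated series.  Here’s a small sketch in C (`multiplySeries` is a name I’m making up; the arrays hold the first n coefficients):

```c
#include <stddef.h>

// Multiply two truncated power series a and b (first n coefficients each),
// writing the first n coefficients of the product into c:
//   c[i] = sum_{j=0..i} a[j] * b[i-j]   (the Cauchy product)
void multiplySeries(const double *a, const double *b, double *c, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        c[i] = 0.0;
        for (size_t j = 0; j <= i; ++j) {
            c[i] += a[j] * b[i - j];
        }
    }
}
```

Squaring is then just `multiplySeries(a, a, c, n)`.  For example, squaring the truncated series for $e^x$ ($1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$) reproduces the leading coefficients of $e^{2x}$ ($1 + 2x + 2x^2 + \cdots$).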

You can also calculate $\displaystyle\frac{f(x)}{g(x)}$ fairly simply:

$\displaystyle\frac{f(x)}{g(x)} = \frac{\displaystyle\sum_{i=0}^{\infty} a_i x^i}{\displaystyle\sum_{i=0}^{\infty} b_i x^i} = \displaystyle\sum_{i=0}^{\infty} c_i x^i$
$\displaystyle\sum_{i=0}^{\infty} a_i x^i = \left(\displaystyle\sum_{i=0}^{\infty} b_i x^i\right)\left(\displaystyle\sum_{i=0}^{\infty} c_i x^i\right)$

$a_0 = b_0 c_0$
$a_1 = b_1 c_0 + b_0 c_1$
$a_2 = b_2 c_0 + b_1 c_1 + b_0 c_2$
$a_3 = b_3 c_0 + b_2 c_1 + b_1 c_2 + b_0 c_3$

$\cdots$

$\begin{bmatrix} b_0 & 0 & 0 & 0 & \cdots \\ b_1 & b_0 & 0 & 0 & \cdots \\ b_2 & b_1 & b_0 & 0 & \cdots \\ b_3 & b_2 & b_1 & b_0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ \vdots \end{bmatrix} = \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ \vdots \end{bmatrix}$

Simply use forward substitution to solve for the values of $c_i$.
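As a sketch (with a made-up name, `divideSeries`, and assuming $b_0 \neq 0$, since otherwise the quotient isn’t a power series), the forward substitution looks like:

```c
#include <stddef.h>

// Divide truncated power series a by b (first n coefficients each) by
// forward substitution on the lower-triangular system above:
//   a[i] = sum_{j=0..i} b[j] * c[i-j]
//   =>  c[i] = (a[i] - sum_{j=1..i} b[j] * c[i-j]) / b[0]
// Requires b[0] != 0.
void divideSeries(const double *a, const double *b, double *c, size_t n) {
    for (size_t i = 0; i < n; ++i) {
        double t = a[i];
        for (size_t j = 1; j <= i; ++j) {
            t -= b[j] * c[i - j];
        }
        c[i] = t / b[0];
    }
}
```

For instance, dividing the series for 1 by the series for $1 - x$ gives $c_i = 1$ for every i, matching the geometric series $\frac{1}{1-x} = 1 + x + x^2 + \cdots$.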

It’s much harder to work out what non-integer powers of the Taylor series are, though.  The pattern for $\sqrt{f(x)}$ seems too complex to figure out from 5 terms.  What’d be really cool is to get the series for $f(g(x))$, since then most anything else would be pretty easy to figure out.

## The Harmonic Series and Beyond

I was going to write a post about AQUA@Home, the cool quantum simulation system you can help with that I’ve been working on at D-Wave, but I’ll leave that for a couple of days and talk about one of the miscellaneous things that I’ve thought about at random.

I’ve known for quite some time that $\sum_{i=1}^n \frac{1}{i}$ (the harmonic series) diverges as n approaches infinity, and more recently, that it scales as $\Theta(\log n)$.  However, $\sum_{i=1}^n \frac{1}{i^2}$ converges, along with the sum of any other positive function that is $O(\frac{1}{i^2})$.  For a while I had assumed that any function $o(\frac{1}{i})$ would converge, but I learned in my second calculus course at Carleton that $\sum_{i=2}^n \frac{1}{i \log i}$ also diverges.

Several weeks ago, I was curious where the true bound between converging and diverging is, and I seem to have found it.

I started by finding a simple proof of the scaling of the harmonic series, and it goes something like the following.  Let’s assume that n is a power of two:

$\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10}+...+\frac{1}{n}$
$>\frac{1}{1}+\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{16}+\frac{1}{16}+...+\frac{1}{n}$
$=\frac{1}{1}+\frac{1}{2}+2(\frac{1}{4})+4(\frac{1}{8})+8(\frac{1}{16})+...+\frac{n}{2}(\frac{1}{n})$
$= 1 + \sum_{j=1}^{\log n} \frac{1}{2}$
$= 1 + \frac{1}{2}\log n$

and:

$\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10}+...+\frac{1}{n}$
$<\frac{1}{1}+\frac{1}{2}+\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+...+\frac{1}{n/2}+\frac{1}{n}$
$=\frac{1}{1}+2(\frac{1}{2})+4(\frac{1}{4})+8(\frac{1}{8})+16(\frac{1}{16})+...+\frac{n}{2}(\frac{1}{n/2})+\frac{1}{n}$
$=\frac{1}{n} + \sum_{j=0}^{\log n - 1} 1$
$= \log n + \frac{1}{n}$

Therefore, $\sum_{i=1}^n \frac{1}{i} \in \Theta(\log n)$.  Using a similar approach, one can show that $\sum_{i=2}^n \frac{1}{i \log i} \in \Theta(\log \log n)$.  Doing this again can show that $\sum_{i=4}^n \frac{1}{i \log i \log \log i} \in \Theta(\log \log \log n)$, and a pattern emerges allowing each one to be expressed in terms of the previous.
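As a quick sanity check of these bounds, here’s a tiny helper in C (`harmonic` is just a name for this sketch):

```c
#include <stddef.h>

// Compute the n-th harmonic number H_n = 1/1 + 1/2 + ... + 1/n
// (using a double accumulator, which is plenty precise for this range of n)
double harmonic(size_t n) {
    double h = 0.0;
    for (size_t i = 1; i <= n; ++i) {
        h += 1.0 / (double)i;
    }
    return h;
}
```

For $n = 2^{20}$, this gives $H_n \approx 14.44$, comfortably between the lower bound $1 + \frac{1}{2}\log_2 n = 11$ and the upper bound $\log_2 n + \frac{1}{n} \approx 20$.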

The final conclusion:

$\sum_{i=2^k}^n \frac{1}{i \log i \log \log i \log \log \log i ... \log...\log i} \in \Theta(\log...\log n)$

where the number of logs in the last factor and in the scaling is k.  Note that the $2^k$ is just to prevent any of the logarithms from going to zero, so assume $n \gg 2^k$.  If any one of the logarithmic factors in the sum has a power greater than one, the series converges, but with them all equal to or less than one, the series still diverges.  In effect, the scaling can get arbitrarily close to $\Theta(1)$, so this appears to be the very brink of divergence.  Some may argue that having a finite k makes it insufficient, since there are theoretically slower-growing functions, but that’s really splitting hairs.

## Inventor Performance Viewer

•July 9, 2009 • 3 Comments

I recently put off the release of Inventor IDE Alpha 5b (probably coming in August, since my honours project is highest priority now) to get something ready for DemoCamp Vancouver that I’ve been wanting for a very long time.  It’s a feature that probably won’t be released until Alpha 6, but you can be certain that I’ll try to make good use of it at D-Wave as soon as I can.  I consider it the first truly awesome feature of Inventor IDE.  It’s easiest to introduce with a picture:

Rock-solid performance data literally in the blink of an eye

(In case you’re wondering, the times are in clock cycles on a 1.66GHz laptop, so 100,000 clocks = 60 microseconds.)

A big problem in doing performance analysis is that getting reliable results and useful scaling info usually means tests that take minutes, so that the impact of the OS or other applications levels off to a constant factor.  If you try to do short tests, you get issues like these:

Bad Performance Data for 2 Mersenne Twister Implementations

Bad Performance Data for 3 Sorting Implementations

These aren’t exactly reliable results; running a short test multiple times produces wildly different results because of just a few bad data points.  On Tuesday around 4am (Pacific Daylight Time), I finally managed to find a decent approach that eliminates almost all bad data without eliminating much good data.  It’s certainly not perfect, but here are the results of running the two Mersenne Twister tests thrice each while AQUA@Home is trying to max out the CPU and the harddrive is thrashing like mad (for unknown reasons):

Much Better Mersenne Twister Performance Data

It’s worth pointing out that these times are in fact noticeably worse than the results when the CPU and harddrive aren’t under heavy load elsewhere (about 1.5 times as long).  The important thing is that the results are still remarkably consistent, and Mersenne128 is still 3.8 times faster than Mersenne32.  You can even still roughly see that Mersenne32 regenerates its data array every 624 elements and Mersenne128 regenerates every 2496 elements.  In case you’re wondering why the CPU times would be so consistently longer instead of scattered: process switches flush the cache and TLBs, so the good data still include the resulting cache misses, whereas the bad data also contain the process switches themselves and other interrupts (like the 1kHz tick and page faults).

I’m still shocked that I can effectively triple-click the test run button on the performance test functions and have the results show up as fast as Windows can show the windows.  It’s even more responsive than the regular run button in the ImageProgram sample app, and that had shocked me.  That said, I found the ironic-but-not-so-important problem that it names the output files with the time only down to the second, so a run that started in the same second as another overwrote the first’s data file.

This kind of capability finally allows developers to rapidly iterate through possible performance improvements, making optimization not only faster but also much easier, because there’s time to try out many more optimizations.  For example, the faster mergesort in the top graph comes from adding a special case for n=2.  In that way, it can also be used as a learning tool for developers to determine what does and doesn’t improve performance.  Best of all, because of the performance viewer’s language-independence, its platform-independence, and the many useful analyses it could do on the data, it’s applicable to many different situations; probably many I won’t think of myself, so I leave that up to you.  I’ve just shown you the first glimpse into what I’ve got planned for it.  🙂

## Bugs that never see the light of day

Today, I fixed a ton of bugs in Inventor IDE and added some simple-but-useful functionality I’d been putting off for a long time.  It occurred to me earlier today that for almost all of these bugs, I’m probably the only one who’s ever encountered them, and some of them even I never encountered; they were fixed without ever being experienced.  I thought I’d share some of these bugs to give an idea of what I’ve been up to and what sort of bugs I encounter in Inventor IDE.

# Bug #1: Huge JComboBox

This is one of those bugs that I can’t actually fix, but I managed to find a workaround after a whole lot of banging my head against the wall.  There’s at least one bug in Java’s implementation of JComboBox under the Windows look-and-feel.  It may have been fixed in Java 1.6 or later, but I have to build with Java 1.5, or nothing will run when somebody tries to run it with Java 1.5, even though I’m not using any Java 1.6 features.

The bug: suddenly and irregularly, the size of a JComboBox will become larger than 16384 x 16384.  When Java tries to create a buffer into which to paint the JComboBox, it runs out of memory, because 16384 x 16384 x 4 bytes per pixel = 1GB.

It took a few hours to figure this out, because the stack trace for the OutOfMemoryError didn’t contain any of my code (since I hadn’t overridden JComboBox.paintComponent()) and the bug rarely occurs.  The way I eventually figured it out was to put a breakpoint at one of the low levels in the stack trace, run the code until I got a debugger trace that looked similar, and look back up the chain at the object in the last pair of paint()/paintChildren() calls; it turned out to be an ObservedStringComboBox (one of my classes).  It took a few more minutes to realize just how huge the size was, and I discovered that getPreferredSize() and getMinimumSize() return similarly huge sizes too, so I traced into them.  After an hour of crawling through the mess that is Java’s Swing, I found that some object was probably supplying a dummy value to indicate that it didn’t really know how large something was supposed to be.

The solution: I’ve overridden JComponent.paintComponent() in both classes I have that extend JComboBox, and they check whether the width or height is over 16384, in which case, they do a rough estimation of the size based on the string widths of their options.  Then, they call setSize(), setMinimumSize(), setPreferredSize(), and setMaximumSize(), just to make sure, before calling the original paintComponent().  It successfully picked up the error when it occurred, and set the JComboBox to a reasonable size that didn’t look out of place.

# Bug #2: Where’s the caret?

This bug has been around for quite some time, and although I was stumped on a good way to solve it (partly due to the next bug), its solution was a one-character change.

The bug:

The caret is currently at the beginning of the first “SegmentDescriptor”, but it’s impossible to see.

The border of the LineFields blocks out the one pixel-width of the caret that would be visible if it had no border.

The solution:

The exact same thing, but everything in a LineField is drawn 2 pixels to the right

This was literally solved by changing the insets from (0,0,0,0) to (0,2,0,0).  Now the full caret is visible.  (It didn’t look as good with just one pixel-width showing, so I have both pixels showing.)

# Bug #3: The layout war

I hate layout issues; Java seems to have a neverending supply of them, which is why I removed LayoutManagers from the vast majority of the components that need good layout.  There were a few major layout bugs that I still had, though, and the most ridiculous-looking had to do with the layout of my ObservedStringTables (shown above correctly, but I’ll use a new example).

The bug: the ObservedStringTable should control the layout of the LineFields it contains, since they should be kept looking tabular, but many LineFields elsewhere are supposed to resize automatically to fit the current size of their text, so that behaviour was built into LineField.

An enumeration isn’t supposed to look this bad (btw, here, it auto-increments by “sizeof SegmentDescriptor” if no value specified)

The solution: it’s not a great solution, but by the time its lack of greatness would matter, it’ll have changed or need changing for other reasons anyway.  I just have LineField check whether it’s in an ObservedStringTable, and if it is, it doesn’t change its own size.

Finally, it looks like a table

In case you’re wondering, yes, the caret is now not visible if it’s at the end of the Increment field.  I’ll fix it now, ’cause I’ll forget something small like that… there we go; insets are now (0,2,0,2).  🙂

# Bug #4: When the impossible is possible

Moving away from UI-related bugs, this bug was technically fixed today, though only because today started at midnight, not when I got up (11:30 AM).  A few days ago, I added support for compiling inline functions that have only text-replacement parameters (i.e. macros minus support for elaborate macro stuff).  That way, I can use it for having debug output functions in Own OS that don’t get called at all if “TEXT_TESTING_MODE” is false.  However, fairly often when compiling, though unpredictably, there’d be an ArrayIndexOutOfBoundsException when adding a reference to a line or when adding an error to a line.

It’s fairly common to add errors to lines during compilation, since the error detection doesn’t check for everything before compile-time, but the only code that would be adding references to lines would be the code for inline function compiling.  When I encounter a call to an inline function, I copy the function, replace the parameter references in the code with their given values, and remove the parameters.  Then I compile it as a regular function, just in the middle of another function.  The weird thing about this error is that upon adding a check to make sure that the array was large enough (which was even redundant, because the line before had been a call to ensureRefCapacity(), which expands the reference array if needed), it still had the same array index exception.  The only way I could think of that this could happen is if multiple threads were messing around with the same line at the same time, but the odds of that were pretty low, considering I’d explicitly designed the compilation not to modify anything concurrently.

The bug: when a parameter is removed from a function that is in global scope, things observing the global scope get notified about it.  A temporary copy of an inline function shouldn’t really be notifying the global scope about its parameters being removed, and updating code based on a notification is probably not something that should occur while compiling.  Sure enough, functions didn’t like getting told that a temporary function they can’t actually access has lost a parameter.

The solution: this is one bug where I haven’t actually determined exactly how/why it was occurring, but the stack traces and stepping through debugging clearly showed that the error was only occurring when a temporary function was broadcasting to everybody that it had lost a parameter, which it shouldn’t be doing.  I just stopped temporary functions from notifying the global scope of anything.  It’s usually not advisable to fix something without fully understanding it, but often, such a restriction would mean that few bugs would get fixed.  For now, I’m okay with this lack of understanding, because I’ll need to re-investigate and adjust this type of thing when I add support for function overloading when I add support for C, and until then, it’s unlikely that any oversight here will rear its head again.

# Yesterday’s Bug: The complete round-off problem

So this wasn’t from today, but it’s a cool quirk of floating-point calculation with which programmers should familiarize themselves.  Suppose that you have an array of floats (4-byte real numbers) that’s 2 billion elements long, and every single element contains the value 1.  If you simply go through the array and sum up the values of the elements, what is the final sum?

You might guess that it’s 2 billion plus or minus some relatively small amount of error, but in fact, the sum is exactly 16,777,216.  This is the case whether you have 20 million elements, 2 billion elements, or 1 trillion elements.  The answer is still 16,777,216.  Why?

floats have a 24-bit significand (23 bits of mantissa stored explicitly), so integers are only representable exactly up to 16,777,216 (i.e. 1.0000…0×2^24).  Once the sum reaches that value, adding 1 would make it 16,777,217, but this can’t be represented exactly in floating point, so it gets rounded back down (following the round-to-even rule) to 16,777,216.  This happens for every single 1 that is added after that too.  In general, when adding many values that are all positive (or all negative) and of similar magnitude, the final sum will saturate at around $2^{24}$ times the average value.
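Here’s a quick way to see the saturation for yourself (a sketch; `accumulateFloat` and `accumulateDouble` are names I’m making up for illustration):

```c
#include <stddef.h>

// Add `value` to a float accumulator `count` times, to show where
// repeated float addition saturates.
float accumulateFloat(float value, size_t count) {
    float s = 0.0f;
    for (size_t i = 0; i < count; ++i) {
        s += value;
    }
    return s;
}

// The same with a double accumulator, which has enough precision
// that these sums stay exact.
double accumulateDouble(float value, size_t count) {
    double s = 0.0;
    for (size_t i = 0; i < count; ++i) {
        s += value;
    }
    return s;
}
```

Summing 20 million copies of 0.5f, the float accumulator gets stuck at exactly 8,388,608 ($2^{23}$: adding 0.5 gives 8,388,608.5, which rounds back down to the even neighbour), while the double accumulator returns the exact answer, 10,000,000.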

This is essentially what I found in a piece of code I had changed to use floats instead of doubles at work (at D-Wave).  It was trying to add 14,400,000 elements with roughly the value 1, and it gave 8,388,608 every time.  Instead of reading the sum, I was actually reading the average, 0.582542 instead of about 1.0, so it wasn’t immediately apparent why it was wrong.  I realized this was the error when I ran it for 144,000,000 elements and got 0.058254, which is 10 times less for 10 times as many elements, meaning that the sum hadn’t changed.  That raises the question: how do you add up the elements of an array to avoid this?

The solution: Well, the way I fixed it at work was to just change it back to using a double (8-byte real number) to accumulate the values.  That doesn’t encounter the complete round-off problem until around 2^53 additions; it can still suffer bad round-off error before then, but that shouldn’t be an issue for the piece of code we were doing this in.  Is there a way that works for any number of additions?

One way that works in O(log n) extra space (or in place if you can destroy the data), while still being O(n) time, is to break the sum up by adding every pair, then every pair of those, then every pair of those, etc.  The way it can be done in O(log n) space is to keep a value for every level, all starting at zero; when you finish a level, you add its value to the next level up.  If that completes the next level up, the value propagates up again; otherwise, you go back to the next pair at the bottom.  This is equivalent to splitting the array up recursively (using O(log n) stack space), only without the disadvantages of recursion.  It also has the advantage that it works with data coming in over time.  Note that there’s still the issue of round-off when adding positive and negative numbers that sum to near zero, but there’s not much that can be done about that.

Edit: Here’s code for this approach (untested):

```c
#include <stdbool.h>
#include <stddef.h>

// Ceiling of log base 2: the smallest k such that 2^k >= size
static size_t ceilLog2(size_t size) {
    size_t k = 0;
    while (((size_t)1 << k) < size) {
        ++k;
    }
    return k;
}

float sum(const float *const array, const size_t size) {
    if (size == 0) {
        return 0;
    }
    if (size == 1) {
        return array[0];
    }
    // The -1 is because the lowest level (pairs) gets added immediately instead of waiting
    const size_t logSize = ceilLog2(size) - 1;
    // size_t has at most 64 bits, so 64 levels always suffice
    float accumulators[64] = {0};
    bool isSecondHalf[64] = {false};
    float partial;
    for (size_t i = 0; i + 1 < size; i += 2) {
        // Add together a pair of elements
        partial = array[i] + array[i + 1];
        size_t level = 0;
        // Go up the tree until reaching the end
        // or a place where it's the first half
        while (isSecondHalf[level]) {
            // This level already holds the first half's sum, so fold it in
            partial += accumulators[level];
            accumulators[level] = 0;
            isSecondHalf[level] = false;
            ++level;
            // Handle the case of size being an exact power of two
            if (level == logSize) {
                return partial;
            }
        }
        // Save the sum at the highest level reached
        accumulators[level] = partial;
        // Now on the second half of that level
        isSecondHalf[level] = true;
    }
    // Make sure to get the last element if there were an odd number of elements
    partial = (size & 1) ? array[size - 1] : 0;
    // Add up all values left in the accumulators, lowest level first (order matters)
    for (size_t level = 0; level <= logSize; ++level) {
        partial += accumulators[level];
    }
    return partial;
}
```

## Own OS: First Successful Output

•June 7, 2009 • 2 Comments

This may seem like a small accomplishment to many, but I now have Own OS boot code that takes one CPU core from 16-bit mode to 32-bit mode to 64-bit mode using code developed and assembled/linked with Inventor IDE.  Until now, so far as I’ve been able to find, no linker has supported 16-bit, 32-bit, and 64-bit code smoothly linked together in one output file.  If that’s the case, Inventor IDE is the first linker to support this, and it makes developing an operating system bootloader an absolute breeze compared with the alternative:

1. The 16-bit Master Boot Record (MBR) or Boot Sector (BS) loads the rest of the 16-bit bootloader.
2. The 16-bit bootloader finds and reads in a file on the drive containing the 32-bit bootloader, with an entry point at a hard-coded address.
3. The 32-bit bootloader finds and reads in a file on the drive containing the 64-bit bootloader, with an entry point at a hard-coded address.

Now it’s as simple as:

1. The 16-bit Master Boot Record (MBR) or Boot Sector (BS) loads the entire rest of the bootloader.
2. Switch to 32-bit mode and jump to the 32-bit starting function (no hard-coded entry address needed, since the linker resolves it).
3. Switch to 64-bit mode and jump to the 64-bit starting function (ditto).
4. The 64-bit function actually starts loading the OS after starting the other CPU cores.

Eliminating the need to find and read in files from a filesystem reduces the size and complexity of the code many-fold.  Plus, now the entire OS should easily fit in the 30KB of disk space I’ve got for the bootloader, so that first step will load the entire OS from the drive.  I would show a screenshot or photo, but it doesn’t write to the harddrive yet (it’s probably for the best), and I don’t have a working camera anymore.  Nonetheless, here’s roughly (just in a different font) what was output:

Output showing the Own OS bootloader switching from 16-bit mode to 32-bit mode to 64-bit mode

There have been similar linkers for just 16-bit and 32-bit code, most notably JLOC.  I would link to the original site, but the site vanished many years ago, which explains why there’s no support for 64-bit code.  There’s also the standard linker for Linux, LD.  Although it supports 64-bit code and has extensive scripting options, I can’t seem to find any evidence that it can link code of different modes together, not to mention that I’d still need a custom assembler to output data to some format it supports that keeps the necessary information.  Also, if LD supported this, Linux could get to 64-bit mode in less than 1KB of code, and that certainly isn’t the case right now.

Hopefully other people will be up for trying this out with Inventor IDE Alpha 5b once it’s out (just the settings dialog box left).  Anyway, I’ve written some code for more extensive debugging that I’m anxious to test out, and I still need to get the other CPU cores up and running, but that should be much easier now.  😀

## Inventor IDE Alpha 5a and Inspiration

After fixing dozens of bugs (probably between 50 and 75), adding a Stop button to stop a running app, and adding support for building operating system code, Inventor IDE Alpha 5a is now out.  🙂

It now no longer freezes if an error (like running out of memory) occurs while compiling, and numerous other checks for errors were added.  If you get an error message, please send me a screenshot and a description of what led to it.  I’ve got a cool idea for a full debugging system for reproducing errors, but unfortunately, there’s no way that I have time to make it.

Alpha 5b should be out soon with a project settings dialog box, so that you can choose to make a 64-bit executable, set up the segments in an operating system build, and hopefully create a DLL that actually has exports.

Alpha 6 will introduce what I consider the first truly awesome feature of Inventor IDE, and it’s something I’ve been waiting years to see in an IDE, even though it’s so simple, but you’ll just have to wait to see what it is… or badger me until I spill the beans.  😉

In other news, Code Cortex is now incorporated, so I can start applying for funding, and if that works out, hopefully I’ll finally be able to start paying people for their work.  Also, I stumbled upon some notes I took at a CUTC 2006 presentation by Infusion Development‘s CEO, Greg Brill, and I remembered how inspiring and encouraging they were:

• find the opportunity missed by the industry now; who’s being ignored?
• business plan sometimes comes after the success
• shut up; do something!
• try it; fail
• business is easy: find a need, fill a need
• learn
• if you love it, you’ll do it

I get a lot of discouraging, useless business advice, usually from people who only care about making money, so Mr. Brill’s talk gave me hope that one person really can make a difference.

## Computer *Science* “Research”

•May 12, 2009 • 8 Comments

I don’t want people to think that I’m just some academic hating on companies from an ivory tower in my last post, because at the moment, I blame academia about as much as companies for the lack of innovation for a very different set of reasons.  My apologies for another ranty, depressing post, but I need to let off some steam after the latest encounter with my honours project supervisor.  I’d like to share an anecdote about one of my roommates when I was at Microsoft.

Joey Jo-jo Junior Shabadoo (fake name at the request of the real guy) is a PhD student (or postdoc, I can’t recall) in Sweden, specializing in wireless sensor networks (WSN), and he was in a team at Microsoft Research, developing what sounded like a software toolkit for WSN.  Everybody on his team had a PhD.   There’s just one problem: PhD computer science researchers in North America aren’t required to know anything about software, or even computers… or even science, whereas PhD computer science researchers in Sweden are still required to know how to develop software, since how else could you really determine whether a hypothesis makes sense?  Why is this a problem?

He was the only person on his team who had actually developed much software in recent history.  There was code by his team when he got to Redmond, but all of it was a mess, much of it was untested, and as it turned out much later, what little of it was tested had faulty tests.  Worse yet, his team hadn’t decided what hardware they’d be using yet, and somehow they had code written for some unknown hardware.  When they eventually determined what hardware they’d use, much of the code no longer made any sense.

Joey was pretty angry about all this, and if you knew him, you’d know that’s saying a lot, since he’s such a light-hearted, fun character.  Moreover, he was determined to get this stuff working.  This guy worked his ass off to pick up the slack for his team.  I’m talking 10 hours Monday through Friday and 8 hours Saturday and Sunday.  He once worked 12 hours on one Saturday or Sunday to try to fix a bug that turned out to be because a message passing function that was supposedly tested had never been tested with an odd number of bytes, and it failed much of the time when a message had an odd number of bytes.  And no, he was not getting paid for overtime.  By the end of it, he was on the verge of giving up on computer science entirely, and he sounded pretty depressed.  I don’t blame him; I would’ve quit or killed myself, since it sounded as bad or worse than my experience at RIM the term before.  I hope he’s doing well, wherever he is now, and I’ll send him a link to this to check that I’ve told the story correctly.

> Scientific method refers to bodies of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.

The above is from Wikipedia.  What does it have to do with the story or my distaste for the state of computer science research (in North America)?  We’ve completely lost the science in computer science.  There is not a single computer science course I’ve taken or heard of that actually shows anything that would come close to meeting the above definition of scientific.  We don’t make hypotheses and test them in any courses, we’re just told to come up with something that does X, often just on paper.  In terms of computer science “researchers”, they appear to often just make hypotheses and publish them.  The “test” is whether other researchers accept them, but there’s no real testing of them, no measurements, no experiments, often no principles of reasoning or evidence.  Often when there is testing, it’s completely irrational, and is either very skewed in favour of those researchers or the tests indicate absolutely nothing.  Why?  Because we’re all implicitly taught that there’s no value in testing hypotheses, so they have no idea how to select tests or do systematic testing; they never see it in undergrad, and it evidently doesn’t get through in grad studies either.

I could give examples of this until I become ill, but I’ll just give three very different ones:

• Here’s a paper that claims that OpenMP makes for faster code and is easier to use than MPI for parallelizing. Their testing: get students to use OpenMP on a single computer (which is what it’s designed for) and get students to use MPI across multiple computers (which is what it’s designed for) with the same total number of CPU cores; time the students and their code.  If you’ve done anything with parallel computing, it’s pretty easy to come up with a correct hypothesis about this experiment.  They neglect to mention that the numbers are completely incomparable.  One is using several threads on a single computer using shared memory (fast) to communicate implicitly (easy), but doesn’t scale up past one machine (since it isn’t designed to); the other is using several processes spread across computers using a network (slow) to communicate explicitly (harder), but scales to hundreds of thousands of machines.  Of course OpenMP will be faster and easier to use in their small-scale test case!  If it wasn’t, it’d be a complete failure on the part of OpenMP!
• Here’s a paper that “proves” that P=NP. It took me about 5 minutes to find a counterexample.  Other people found counterexamples too.  So, the author made a completely new algorithm and claimed that it “proves” that P=NP.  I implemented it, picked an arbitrary 10-variable example I found, and it got the wrong answer; others have had the same result.  He’s made other algorithms for P=NP that have been disproven as well.  I was sad to learn that, as of recently, the author still thinks his algorithms prove that P=NP.  The simple fact is that he doesn’t test his work at all, leaving it to others to prove him wrong.  It’s taking “innocent [of academic fraud] until proven guilty [of academic fraud in every paper]” to the extreme.
• Here’s one that I wrote myself on jump encoding. Yes, I’m even willing to bash my own work. It turns out that the statement of assumptions in the abstract is wrong.  It’s true that the algorithm isn’t always space-optimal if an array is declared with a size dependent on the negative of the number of bytes in a section of code, but it’s also not always space-optimal if people put in preprocessor statements that exclude things based on the size of code.  It also turns out that it’s NP-complete just to determine whether there’s a way to include/exclude code such that no paradoxes occur (e.g. if this code is included, exclude it, and if this code is excluded, include it).  That’s why assemblers restrict what you can and can’t do with symbol references, but on some assemblers it’s still possible to construct a case where the algorithm isn’t space-optimal without running into those restrictions.  I just happen not to care about those cases, since they’re ridiculous and not supported by Inventor IDE (they’re hard enough to support at all, let alone space-optimally).
What’s really missing from this paper, other than a formal proof of correctness, is any sort of empirical check of correctness or performance analysis, since I hadn’t even implemented the algorithm yet.  The eventual implementation is fairly similar to what’s described, and it does check out empirically with the testing I’ve done, but I still haven’t done any performance testing to see whether it actually runs faster in practice than the worst-case O(n^2) algorithms that are a bit simpler.  A few people quite rightly pointed out that a very simple O(n^2) algorithm could likely beat the crap out of this O(n) algorithm in practice, which I neglected to state in the paper.  Hopefully I’ll get a chance to check that once a friend of mine has finished making the next big feature of Inventor IDE, since it’ll help a ton.  Maybe I’ll post a new version then.

In order to move forward as a field, we absolutely need a course that teaches some form of experiment design and systematic testing, including performance testing.  I don’t mean garbage like the QA courses I’ve heard of; I mean real testing of things that are really worth testing.  In only one assignment I’ve ever had (assignment #1 of COMP 2402) did we measure the actual time of a piece of code, and even that was a toy example with only 5 data points.  I find that outrageous and unacceptable.

In relation to innovation, the academic world is so full of useless papers that finding anything useful in some fields can be a life’s work.  How can people possibly innovate when they can’t find out what’s actually useful and what’s just empty hypothesis?

I haven’t even touched on the problem that so much research is done for the sake of “wouldn’t it be cool if ____” (the answer is no, by the way) or “this is popular, so it must be important”.  Researchers are often so separated from the real world that they honestly don’t realize that most papers will probably never help anyone.  It reminds me of how, a few weeks ago, some lawyers tried to convince me that most users actually read and understand the legal agreements on software.  *sigh*  😦

## Inventor IDE Debugging

First, I’d like to say sorry that I haven’t yet posted the contest that I’ve planned to hold.  I’ve now got a source of legal advice, and since the first contest is for a fairly important-but-simple component of Inventor IDE, I don’t want to risk screwing it up until I get a lawyer’s ideas for it.  I might hold the competition on Top Coder, but it looks like they’ve changed their model of letting people host contests.  If I can find someone up for making some really cool interactive GUI components, I could contract it out to them, but I’d like to see what different people can come up with, since there’s a lot of room for creativity.

Also, sorry to those who’ve experienced bugs with Inventor IDE alpha 5.  I was going to make a special build just for one bug reporter so that I could fix a freezing bug I can’t reproduce on my machine.  However, there should really be a more extensive debugging system that logs information to files, to help me identify what went wrong when I get a bug report.  As such, I’ll release the regular version with logging of error information, plus a special debugging version that will periodically check for model consistency and log extensively, even when no error has occurred, so that errors can be fixed more easily.  I can’t give a reliable ETA on this release yet, since I’ve got a lot going on this week, but it’s something I really need for future releases anyway.

## Inventor IDE Alpha 5

After many months of mind-breaking work, Inventor IDE Alpha 5 is finally here with built-in assembling, a greatly-improved UI, and a new sample application!  These should make the rest of the assembly language video tutorial go much more smoothly.  I encourage those of you running Windows to download it and press the bright green Run button to see what I mean.  (Sorry to Linux users without WINE; I only have it compiling to 32-bit Windows executables so far.)  I was going to post more videos this weekend, but some last-minute bugs cropped up that couldn’t be ignored.

Also of note is that soon (probably a couple of weeks, since I’m swamped with 3 performances, 3 assignments, and 2 exams in these next 2 weeks) I’ll be posting details of some competitions I’ll be holding to let you all get involved with developing important (and very cool) pieces of Inventor IDE.  It’s no fun doing this all alone, so I’m eager to see what people come up with for creative and cutting-edge components.  😀

I don’t want to jinx it, but if all goes well I may be able to get enough government funding for Code Cortex to start paying people a salary too.

## Preview of Inventor IDE Alpha 5

•March 4, 2009 • 2 Comments

It’s getting close, I promise.  This version has been the biggest change yet, and it set back development on Own OS by a couple of months, but here’s a screenshot of the unstable version of Inventor IDE alpha 5.

Screenshot of Inventor IDE Alpha 5

Yes, your eyes do not deceive you.  There is no menu, and there is a Run button.  The find buttons won’t work until alpha 6, but if all goes well, you should be able to download Inventor IDE, extract it, start it, and just hit Run.  It won’t depend on MASM anymore; it assembles the code on its own.  This was a ridiculous amount of work, but the great simplicity of just being able to press Run with no configuration of external programs is well worth it.

I’ll also be updating the sample project to be much more useful for the video tutorials, displaying the input and output images instead of making you look for them all the time.

Alpha 6 will be a much smaller change, so it won’t take nearly as long (I hope).  At that point, it’ll be just about beta time!  🙂