Humans vs computers

Computers are still getting more and more powerful every day, though physical limitations have slowed down the pace significantly as far as single-core capabilities are concerned. It's not rare to hear statements along the lines of "one day computers will surpass humans in computational power".

It's important to not take such statements out of context, however. Simply having the computational power of a human brain is not exactly useful. Not to mention, it's also a completely unfair comparison, as computers and humans do very different things and have their own specialities. When was the last time a human managed to produce a billion digits of pi? Can computers talk so intelligently that a human won't notice?

Firstly, note that while I'm firmly of the belief that artificial intelligence (on the level of humans) is a perfectly reasonable goal, we are not there yet, and we are probably still a long way off. From a physical perspective, the architectures that human thought and computer processing run on are very, very different today, although with massively parallel processors we are getting a little bit closer. The fact that computers are Turing-complete doesn't really say much about their computational power or their usefulness in various applications; after all, there are plenty of ways to implement them (and many more ways to construct Turing tar pits), and we happen to have stumbled upon and stuck with this Von Neumann-like architecture. It's great for things like numerical computation, but for tasks closer to artificial intelligence it's a bit more troublesome to deal with. Despite all this, lots of progress has been made in various areas (such as facial recognition, natural language processing, etc.). It might be tempting to think that with a completely different architecture of computation some of these problems wouldn't be so hard, but I suspect we tend to gravitate towards the more mathematical way of describing things anyway, and that's unlikely to change. (Can you even imagine what an algorithm designed for a neural network would look like on paper? I doubt most developers would want to tread here.)

That also happens to be the second problem, one that we haven't solved. Imagine teaching a sufficiently smart alien child how to differentiate: the child might not know much about the theory behind calculus, but you can at least teach all the rules that can be applied mechanistically (and recursively) to obtain the result. Now try teaching such an alien how to recognize and differentiate human faces ... that is essentially what solving an AI problem is like. We don't even have full knowledge of the mechanism of facial recognition (much progress has been made, though!), considering that many areas of cognitive psychology are still quite active today.

So what's the conclusion here? Well, not much, other than a pessimistic reminder that the so-called "AI" is still probably a long way off; but with hardware upgrades (dramatic ones, mind you) and better algorithms / techniques / architectures, it might not be fantasy one day (I hope!).


Linking order matters, really

This is probably one of those biggest "d'oh" moments I've had in a while. The goal is quite simple: to compile a program that depends on a certain library (armadillo in this case, but the actual library involved is irrelevant). Sounds easy right? I would think so too.

This is (a simplified version of) what the makefile runs when it does the final linking stage:

g++ -Wall -larmadillo -o foo foo.o

And it worked just fine on the Debian box.

It doesn't, however, work on Cygwin:

undefined reference to `_wrapper_dsyev_'

Now, the first thing I suspected was that I didn't build the library properly (after all, not everything works out of the box on Cygwin). So I redid that a couple times with various different settings.

And it didn't help.

Then I tried disabling the "wrapper" feature that Armadillo used and linking to BLAS and LAPACK directly.

No that didn't work either.

I went so far as to actually browse through the library using Dependency Walker to make sure that the symbol _wrapper_dsyev_ (or just _dsyev_ without the wrapper) was actually there, and it was, in fact. (I thought that maybe Fortran used a completely unconventional name-mangling convention or something, but that was not the case.)

Yet, despite everything I tried, I forgot to check one thing -- the ordering of the linker arguments -- the idea just eluded me for a long time, until I stumbled upon a couple StackOverflow threads by accident. I wasted a lot of time thinking that there was just something wrong with the environment (as in I didn't set up the libraries correctly, somehow) because the problem only manifested itself on Cygwin.

Mind you, I was aware that the order matters, which is why I prepended the library before I included the object files.

But that's completely wrong! The right way is to append the libraries. To quote GCC:

It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, ‘foo.o -lz bar.o’ searches library ‘z’ after file foo.o but before bar.o. If bar.o refers to functions in ‘z’, those functions may not be loaded.

Linking order really does matter, and you want the dependencies to come after the dependents, not the other way around (as I had wrongly assumed). I found it counter-intuitive at first glance (to include the libraries after the object files), but it turns out I had the wrong mental model of how ld (the linker) works. What actually happens is that ld keeps track of any undefined symbols it has encountered so far and resolves them when they are later found in a library. Any library member that is not needed at that point gets discarded and won't be available to the object files that come afterwards (don't ask me why).

Shouldn't be too hard to solve by hand, though. The hard part is automating this, since I use a (custom) build system to generate the makefile. Hypothetically, one could use the nm command to dump the defined and undefined symbols of the object files and libraries and produce a dependency graph out of them. Then a valid order can be determined using topological sorting to avoid this linker issue, at least assuming there aren't any circular dependencies (!).
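To sketch that idea (all the file and library names below are made up for illustration), the sorting step could be done with Kahn's algorithm; in practice the graph would be built by matching nm's undefined symbols against defined ones:

```python
from collections import deque

# Hypothetical dependency graph: each object file / library maps to the
# libraries whose symbols it needs.
deps = {
    "foo.o": ["libarmadillo"],
    "libarmadillo": ["liblapack", "libblas"],
    "liblapack": ["libblas"],
    "libblas": [],
}

def link_order(deps):
    """Kahn's algorithm: dependents come first, dependencies after."""
    indegree = {n: 0 for n in deps}
    for ds in deps.values():
        for d in ds:
            indegree[d] += 1
    queue = deque(n for n, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in deps[n]:
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(deps):
        raise ValueError("circular dependency detected")
    return order

print(link_order(deps))   # ['foo.o', 'libarmadillo', 'liblapack', 'libblas']
```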

But that just seems a bit overkill here, since all I have is one library. The easy fix (for now) is to just put all the linker options at the end. Alternatively, there is a special linker option (GNU ld's --start-group ... --end-group) that repeatedly searches a group of libraries until no more symbols can be resolved, at the cost of link-time performance.


Moving onto C++11 ... slowly

Despite the fact that C++1y is only a year away from being finalized, C++11 adoption is still quite slow and that is quite problematic for "early bird" developers (like me). The situation is quite similar to the Python 2 to 3 mess (that still isn't resolved), but unlike Python -- a dynamic scripting language -- the feature detection mechanisms in C++ are rather lacking, to put it nicely (and you have to resort to ugly preprocessor macros).

There are many new, interesting features in C++11 that are helpful for template meta-programming: some of them reduce verbosity, others add new features, and a few can potentially improve the performance of programs (e.g. move semantics).

The table here from apache.org provides a handy list of the features that are (and aren't) supported by various compilers. Some of the features that have very strong support from many compilers are (at least the ones that I do care about):

  • auto (extremely useful: type inference can really reduce the verboseness of the language)
  • decltype (support existed prior to C++11 but every compiler has a different implementation so it's good to unify them under one name; beware though, some compilers will choke on recursion)
  • long long (well, this is actually an old feature, recently made official by the standard so it's no surprise that every compiler supports it)
  • Right angle brackets (not very noteworthy, but I suppose it's nice to save one space character every time "> >" occurs)
  • static_assert (useful, though not absolutely required and some template + macro tricks can emulate the behavior to some extent)
  • Variadic templates (another great feature, and with surprisingly good support too)

The compilers of interest to me are the 3 major ones: GCC, Clang, and MSVC, since these are the ones I use primarily. The Intel and IBM compilers also deserve some notice, but I'm not familiar enough with them to say anything about them. Personally, I like using GCC and Clang (especially since they have practically the same interface), and both are quite good at keeping up with the latest standards. MSVC, on the other hand, is often slow with adoption, so it tends to be the lowest common denominator among the three. Despite all this, it's still important to realize that most users will not have bleeding-edge versions of GCC installed (there's a server from the department that I occasionally use for computations and it has GCC 4.1, which is from all the way back in 2007!).

Why did I write about this topic? Well, I just got bitten by the unfortunate fact that I use Cygwin GCC on my home computer and it doesn't have fancy range-based for loops. It's not hard to fix -- it just makes the code messier to write. But for some things, like variadic templates and decltype, there's often no viable alternative, so I should be thankful that those are fairly well supported. I want to be able to use the newer features -- after all, that's what they are there for -- but in the name of portability it's often not possible to do so.

At this point I might as well just give up and say, "if it doesn't compile on your computer, then go upgrade to a real compiler!" Without going to that extreme, I think it's wise to draw a line somewhere. The features mentioned above are probably a fairly lenient guideline for using some of C++11 without making the code uncompilable on anything but GCC 4.8+ (the current stable version, as of writing).

Hopefully, some day C++11 will be as widely supported as C++03, though I suspect it's probably at least a decade away, in which case another new standard will have evolved and the cycle begins anew.


Pasting Unicode characters in Emacs

I always ran into issues when I tried to paste Unicode characters into Emacs. For example, copying a Greek small letter rho (ρ) would result in a question mark (?) in the Emacs window. Oddly enough, copying Unicode characters from Emacs usually works just fine.

I had originally thought it was a bug, but eventually realized that one of the settings in my Emacs profile (.emacs or init.el) was the root of the problem:

(set-selection-coding-system 'utf-8) ; Don't do this on Windows!

I had previously added this line as part of a hack to force Emacs to use UTF-8 as the default buffer encoding. As usual, I copied the code from this StackOverflow answer without too much thought and ended up introducing an unintentional bug, which I didn't realize until several months later. The most unintuitive part of this line is that in Emacs, set-selection-coding-system and set-clipboard-coding-system actually do the same thing, but the connection between selections and clipboards is not exactly obvious unless you know some Emacs jargon.

Anyway, the fix is simple: just remove the offending line, since it doesn't really affect the encoding of the buffer itself (so it's not needed for the hack to work). If that's not an option, you can use the fact that the Windows API uses UTF-16LE as its standard Unicode encoding to override the clipboard/selection coding system:

(set-selection-coding-system 'utf-16-le)


Lua, compared with Python

My current personal project involves Lua programming. As part of this project, I needed to write a generic serialization algorithm for Lua objects, with a few constraints that I chose to impose on it (it would've been too easy otherwise!). I might discuss this serialization algorithm at a later date, but that's not really today's topic of concern. Besides that, there are other, much more complex issues that I'll need to tackle.

I daresay it's a fairly ambitious project; I'm not sure how far I can go before I say "screw all this" and never look at it again. The holidays provide me with some extra free time to work on it, but it's still quite a challenge. Either way, the experience has been quite fruitful and continues to be, regardless of the outcome of the project.


Firstly, what is Lua? It's not as well known as Python or Java, but there is definitely a sizeable group that works with it regularly (and it's certainly not a toy language like Whitespace). I like to think of it as a primitive, dynamic programming language that is typically used as a scripting language embedded in other applications. Prior to Lua, my experience with dynamic programming languages came from Python, so expect me to make many comparisons with Python in this post.

Although I haven't looked for benchmarks to verify the claim, the general perception is that Lua is very fast compared to other scripting languages, as well as clean to implement (granted, being a dynamic interpreted language, there is always the unavoidable cost that makes it perform at least an order of magnitude worse than statically compiled languages like C or C++). It doesn't bring a lot of baggage like Python does, which has its ups and downs. Because of its speed and leanness, it is fairly popular in gaming circles, where many games use it as a scripting language.


The syntax is somewhat reminiscent of Python, except that Lua's keywords are more verbose and it allows free-form syntax, with whitespace and newlines playing a relatively minor role (alas, I still don't understand why Python's syntax has to be so rigid for a scripting language). This means you can indent Lua however you like and break statements over several lines, or just squeeze them all onto one line (not recommended for readability, of course). Semicolons can be used to terminate statements, just like in C and Python, but they are really just decoration since they are completely optional.

Lua operators are similar to those of Python, although Lua uses ^ (caret) for exponentiation and .. (two dots) for string concatenation -- strangely enough, an operator of its own rather than an overload of the addition operator as in other languages. It is said that a separate concatenation operator is needed for the sake of disambiguation, since Lua coerces numbers into strings and vice versa when they are mixed together.

Lua's attribute operator . (dot) works in a way that will be familiar to those versed in C-like languages: it can be used to access variables and functions embedded in other objects (using tables, as will be explained later). However, Lua also introduces an alternative form, : (colon), for calling a function within an object while implicitly passing that same object as the first parameter. Notice the similarity to Python methods, although in Lua the programmer has to explicitly use the colon operator when calling with this "method" convention. It is a bit like C++, where the :: (two colons) operator permits access to elements within the scope of a class or namespace (corresponding to Lua's dot operator), while the . (dot) operator accesses methods of a class instance (corresponding to Lua's colon operator).

Functions can be defined in Lua in a variety of ways. In particular, a function can be declared inside another function, since a function block is not treated specially compared to, say, a for or a do block. Any variable outside the scope of the function can be accessed from within, even if the outer scope no longer exists at the time of the call (this is implemented via closures). This feature exists in Python as well, although Lua has greater flexibility in its function-defining syntax: you can declare anonymous functions in place, e.g. as the argument of a function or as the right-hand side of an assignment. Python does have lambdas, but they are much less powerful syntactically than Lua's anonymous functions.
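For reference, here's the Python side of that comparison: a closure that outlives its enclosing scope, and a lambda for contrast (both functions are my own illustrative examples):

```python
# A closure capturing a variable from a scope that has already returned.
def make_counter():
    count = 0
    def increment():
        nonlocal count
        count += 1
        return count
    return increment

counter = make_counter()
print(counter(), counter(), counter())   # 1 2 3

# Python's lambdas are limited to a single expression, whereas Lua's
# anonymous functions are full function bodies.
double = lambda x: x * 2
print(double(21))                        # 42
```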


Lua has a special type called nil, which is analogous to None in Python, but its purpose in Lua is far more crucial. Why is nil so special? Well, nil basically represents the absence of a value. If a variable doesn't exist, it has the value nil, which can be checked by comparing for equality with nil. If a table (I will come to this in a bit) has no item associated with a given key, the corresponding value is nil. If a parameter of a function is omitted, that parameter has the value nil. Conversely, setting a variable to nil essentially erases that variable's value (and the same goes for table entries), although this doesn't guarantee that the memory associated with the value is freed (not a big deal, since Lua has a garbage collector).
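A quick Python-side contrast: the "missing" cases that nil answers uniformly in Lua are handled in Python with sentinel defaults or explicit deletion:

```python
t = {"a": 1}
print(t.get("b"))    # None -- like a Lua table yielding nil for a missing key
# In Lua, t["a"] = nil would delete the key; in Python you use del instead:
del t["a"]
print(t)             # {}
```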

Lua has only one number type, cleverly dubbed numbers. These are typically floating-point, and the precision can vary depending on the implementation used (usually it's double precision). It also has a boolean type, which is mostly of limited use, since conditional statements automatically treat any non-nil, non-false value as though it were true.

The string type is immutable and can contain any byte value from 0 to 255 (on most implementations, at least). Lua has no understanding of encodings, so its so-called "string" is really an immutable array of bytes that supports some basic string operations (sort of like a C array, but without the horrors of memory management). Interestingly, Lua does have a rudimentary form of regular expressions called "patterns". They can be very useful, but are nowhere near the potency of even regular languages, let alone the power of advanced Perl/Python regular expressions.

Much like Python's, Lua's type system is dynamic, allowing variables to hold any kind of data while the values retain their type identity. Compared to Python, however, Lua's type system is very simple: it permits only the 8 fundamental types (some of which were described earlier), and you cannot define custom types without altering the underlying implementation. This doesn't really reduce Lua's expressivity, though, because it compensates by providing an extremely flexible type: tables.

Tables are mutable associative arrays (implemented as hash tables) that map values to values, where the values themselves can be any of the 8 fundamental types, including tables (possibly recursively). Variables store only references to tables, so the same table can be modified through any of its references. They are equivalent to Python dictionaries, but with added syntactic sugar: accessing an attribute of a Lua table is equivalent to accessing the element whose key is the attribute's name (Python's built-in dictionary type does not allow this). Thus one can construct complex data structures simply by using tables. It is rather revealing to realize that just about any complex data type can be mimicked using Lua tables.
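To illustrate the difference from Python dictionaries (with types.SimpleNamespace as the closest attribute-style analogue in the standard library):

```python
from types import SimpleNamespace

d = {"x": 1}
# d.x would raise AttributeError on a plain dict; in Lua, t.x and t["x"]
# are the same thing for a table t.
ns = SimpleNamespace(**d)
print(ns.x)      # 1 -- attribute access
print(d["x"])    # 1 -- key access
```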

Arrays don't exist as a distinct type in Lua. They are merely tables indexed by positive integers. This is not as inefficient as one might imagine, because the implementation does try to optimize for such usage. One of the most annoying quirks is that array indices always start at 1 rather than 0, which takes some effort to get used to. Regardless, even though arrays aren't a fundamental type, they are used quite frequently and are generally treated differently from general tables.

Functions are first-class objects in Lua, just like in Python. This means that functions can be stored in tables, which allows tables to be used as namespaces. Python essentially does the same thing (it implements objects using dictionaries, via the __dict__ attribute), but it doesn't do so as transparently as Lua.


One thing to get used to, coming from the Python world, is that variables are global by default in Lua unless the local keyword is used (loop variables are always local, though). This can be tricky, as it is easy to unintentionally create stray global variables or overwrite global data.

There is in fact an important performance consideration when it comes to variable scoping: local variables are nearly always faster to access than globals. Since functions are also stored in variables, it is generally a performance improvement to store a function in a local variable before calling it repeatedly (I've profiled this personally). The local variable doesn't make a copy of the function, since only function references get stored; it simply caches the function's reference somewhere faster to access (in part because the global namespace/table is very polluted). This does cause a bit of a coding mess, as you will often encounter many local function declarations at the beginning of a script (even built-in functions work faster when cached this way).

Access to local variables is often fast enough that caching an intermediate calculation is generally worthwhile if it will be used more than once. The reason is that looking up a function and calling it is often quite expensive compared to reusing a local variable. Note that these are just general performance observations; always profile when in doubt!
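Interestingly, the same caching trick applies in Python, where global and builtin name lookups also happen at call time; a small sketch (my own example):

```python
import math

def norms(points):
    sqrt = math.sqrt   # cache the attribute lookup in a local variable
    return [sqrt(x * x + y * y) for (x, y) in points]

print(norms([(3, 4), (5, 12)]))   # [5.0, 13.0]
```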

A personal remark: I prefer Python's local-by-default system as it avoids unintentional mistakes, but it is not exactly trivial to implement in Lua because its scoping system is not quite the same—in Lua, any block can act as a scope (kind of like in C), so variables can be local to any one of them.

Function calls

Calling functions with arguments in Lua is a pretty straightforward business, unlike Python, where you have to figure out the subtleties of variadic and named parameters. Python is stricter in that unless the callee requests it, the caller isn't allowed to supply excess arguments. Lua just ignores any extra arguments you supply, which can be both harmful and beneficial. If you have an array of values that needs to be fed into the parameters of a function, you can convert it using the unpack() function; it does literally what the name says: unpacks the array into a group of parameters to be passed to another function.
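For comparison, Python's equivalent of unpack() is the * spreading operator (add3 is a made-up example function):

```python
def add3(a, b, c):
    return a + b + c

args = [1, 2, 3]
print(add3(*args))   # 6 -- like add3(unpack(args)) in Lua
```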

The callee's job of parsing the arguments isn't so easy in Lua, though. Python does a good job of sorting out the arguments for you, while in Lua you're basically on your own. Omitted parameters receive the value nil, so it becomes the callee's job to check for omitted parameters and supply default values when the caller doesn't. There are no named parameters in Lua, but they aren't too difficult to simulate using tables, albeit more verbosely.

Variadic arguments in Lua are indicated by the funny ... (ellipsis) token. The token represents all the remaining, unnamed parameters and it's "already unpacked", so it can be passed onto other functions using the same name, or converted into an array by using {...} (an ellipsis surrounded by curly braces).

Returning values from functions is pretty trivial: you simply return each value separated by a comma, kind of like a Python tuple. However, Python will complain if the caller doesn't provide enough variables to hold the return values, or, if you provide only one, it will stuff all of the returned values into that variable as a tuple. Lua handles it differently: it will return as many as it can, throwing away any of the returned values that you don't catch. If you want to catch all of the values as an array, it is necessary to surround the function call with curly braces. This can be a convenient feature in some situations.
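The Python half of this comparison, for concreteness (minmax is my own example):

```python
def minmax(xs):
    return min(xs), max(xs)

lo, hi = minmax([3, 1, 4])   # the number of targets must match exactly
print(lo, hi)                # 1 4
both = minmax([3, 1, 4])     # one variable: the whole tuple, not the first value
print(both)                  # (1, 4)
# lo, hi, extra = minmax([3, 1, 4]) would raise ValueError in Python;
# Lua would simply set the extra variable to nil.
```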

Error handling

The concept of exceptions (as in C++, Python, C#, etc.) is foreign to Lua. Its error-handling mechanism is very simplistic: you call a function named error() with a string message and it gets caught by some function up the stack. Lua does not possess a specialized error-catching block like other languages (e.g. try in Python); instead, there are the protected-call functions (pcall() and xpcall()) that can be used to catch an error that may occur within a function, passing along a stack dump of where it occurred. To catch an error within a block of code, the block would have to be wrapped in an anonymous function, which adds to the verbosity.
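To give a feel for the shape of Lua's protected calls, here is a pcall-like wrapper sketched in Python (the helper is my own, not a standard function): instead of a try/except block at the call site, you get back a (success, result-or-error) pair.

```python
def pcall(fn, *args):
    """Call fn(*args); return (True, result) on success, (False, message) on error."""
    try:
        return True, fn(*args)
    except Exception as e:
        return False, str(e)

ok, result = pcall(int, "42")
print(ok, result)       # True 42
ok, err = pcall(int, "not a number")
print(ok)               # False
```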

For the most part, errors are primarily used as an indication of a programming error (or logic error, as some call it). Unexpected runtime conditions seldom use Lua's error-handling mechanism (and shouldn't), which stands in stark contrast with Python's exceptions-for-anything-and-everything philosophy.


Learning and programming with Lua has certainly broadened my perspective. It is quite a wonder that a language can be so simple yet possess such a diverse array of possibilities. In this respect, it reminds me of Scheme, except it feels like a real programming language rather than a mathematical curiosity. It is possible to program in Scheme, of course (I have done it myself), but Scheme has too little syntactic sugar to enhance the experience. In contrast, Lua provides a lot of added syntax to allow for different programming paradigms, which makes a programmer feel more at home. There is also a "metatable" mechanism that provides even more trickery, but I opted not to discuss it here as it is quite complex.

There are some idiosyncrasies in the language that make it feel awkward (like 1-based indices, or its verboseness), but they're not exactly devastating. The biggest downside, though, is the lack of good libraries: as a lightweight embedded scripting language, it lacks a lot of the "basic" functionality of other scripting languages ("no batteries included"). For even the simplest tasks, you will find many, many competing implementations on the web. So in a sense, Lua is also kind of "low-level" compared to other scripting languages; a bit like working with C, without the pains of memory management. Such is the tradeoff for a simple language.

More info

The official Lua website is small and lean, much like the theme of the language itself, but it is an excellent resource and is probably the only reliable resource out there. Most of the other, unofficial documentation tends to be outdated or incorrect, so I found it especially helpful that the official reference document is very clean and readable.

The website has a lot of very useful information, such as the history of the language, as well as several interesting technical papers on its implementation details, if you care about them.

Outside of the official documentation, I found the lua-users wiki to be helpful. Like any wiki, take it with a grain of salt.


A silent rebirth

Update: finally got around to cleaning up the older posts -- deleting the worthless ones and polishing the more useful ones, etc. Hopefully it won't look as pitiful as it used to. Ideally, there should be much less nonsensical rambling in my future posts here.

Three years have passed since my last post here, and many things have changed. After "rediscovering" this old blog of mine, I decided to resurrect it in the hopes of putting new and interesting information for others to peruse. I was pleasantly surprised that I had written ~100 posts here on a variety of topics, considering how much younger I was back then (and how much less I knew).

Most of the older posts have been preserved more or less as-is for reference purposes. I would edit them and make them a bit more presentable (the definition of "presentability" varies with age, unfortunately), but tediously going through a hundred posts is not something I remotely have the time for these days (maybe I'll recategorize the posts, at most). I did make the effort of repairing the equations and formulas in some of the more recent posts, as it appears that the LaTeX image server that I used (forkosh.dreamhost.com) has vanished from the Internet. From here on, MathJax will be the platform I use to display equations on my websites. It's a fairly new platform that uses the newer features found in today's browsers to implement mathematical typesetting, and the output is pretty amazing IMO.

The breadth and depth of the topics will probably vary a lot so I don't exactly anticipate a static audience. It is mainly aimed at those who are searching around to solve a very specific problem and hoping someone has, even tangentially, attempted to do so as well. I know this from experience, because I personally do it all the time with unfamiliar problems. Unfortunately, real life problems usually go unsolved (unless it's something from a famous textbook that every undergrad has worked on).

My intent is to keep this site or "blog" flowing from here on. No promises of regularly scheduled posts or anything, but I will continue posting informal discussions on topics that interest me, which would be primarily a mixture of programming and physical concepts. Let's see how far that goes.


Peak detection: derivative smoothing

I have recently been working on a Python program that detects peaks in an oscilloscope trace. With help from this post I found, I managed to find an algorithm well suited to my situation.

Mathematically, finding the peaks (local maxima) of any graph is straightforward, as long as the function is simple enough. All one has to do is find the x-values where the first derivative is zero, and then apply the second derivative test to distinguish the maxima (peaks) from the minima (troughs).

However, real data tend to be rather coarse, so peak detection can be difficult. From the article I mentioned earlier, this issue can be resolved by "derivative smoothing". The typical way of estimating the derivative of a function represented by a set of discrete points is to take every pair of points and calculate the gradient. Derivative smoothing involves taking more than two points to calculate the derivative. The article says this can be achieved by taking a group of points and calculating the gradient using just the first and last points in the group, but I decided to try something different: making a linear fit on the group of points and taking the gradient of that fit.

I don't know if this is more accurate or appropriate, but I tried it out anyway, and so far it's working well. The only downside is that the calculations are much slower since fitting a set of data points is computationally intensive. In any of these peak detection methods, the precision of the peak is always limited by the discreteness of the values; to improve this precision, one can attempt to fit the peak itself with a fit curve, say a Gaussian (if symmetric), and use the fitted parameters to find a more precise value for the peak.
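As an illustration of that refinement idea, here is a sketch (my own, using a parabola as a simpler stand-in for the Gaussian fit mentioned above) that fits a quadratic through the three samples around a discrete peak and takes its vertex as a sub-sample estimate of the peak position:

```python
def refine_peak(x, y, i):
    """x, y: sample lists; i: index of the discrete maximum (0 < i < len-1)."""
    denom = y[i - 1] - 2 * y[i] + y[i + 1]
    if denom == 0:
        return x[i]   # flat neighborhood: nothing to refine
    # Vertex offset of the fitted parabola, in sample units
    # (assumes uniform sample spacing)
    offset = 0.5 * (y[i - 1] - y[i + 1]) / denom
    return x[i] + offset * (x[i + 1] - x[i])

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 3.0, 4.0, 3.0, 0.0]    # symmetric peak centered at x = 2
print(refine_peak(xs, ys, 2))      # 2.0
```

For an asymmetric peak, the estimate shifts toward the taller neighbor, which is exactly the sub-sample correction we want.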

# Peak detection algorithm
# Definition: a point is a 2-tuple: (x, y)

# Calculates the slope of a linear fit through a group of points
# points: a list of points
def getLinearFitGrad(points):
    N = len(points)

    # For only 2 points, just calculate the gradient directly
    if N == 2:
        return (points[1][1] - points[0][1]) / (points[1][0] - points[0][0])

    # Use linear regression (least-squares slope)
    sum_x = sum(x for (x, y) in points)
    sum_x_squared = sum(x * x for (x, y) in points)
    sum_xy = sum(x * y for (x, y) in points)
    sum_y = sum(y for (x, y) in points)
    determinant = N * sum_x_squared - sum_x * sum_x
    return (N * sum_xy - sum_x * sum_y) / determinant

# Finds the peaks in a set of data
# dataSeq: a list of points.
# smoothingFactor: number of points grouped together for the gradient estimate (>=2).
# Returns: a list of peaks of the form [(gradient change, peak time), ...].
def findPeaks(dataSeq, smoothingFactor=2):
    peakTimes = []
    prevGradient = None
    for i in range(len(dataSeq) - smoothingFactor + 1):
        pointGroup = dataSeq[i:i + smoothingFactor]
        currentGradient = getLinearFitGrad(pointGroup)
        # A peak occurs where the smoothed gradient changes from positive
        # to non-positive
        if prevGradient is not None and prevGradient > 0 and currentGradient <= 0:
            # Take the highest point in the group as the peak location
            peakPoint = sorted([(y, x) for (x, y) in pointGroup])[-1][::-1]
            peakTimes.append((abs(prevGradient - currentGradient), peakPoint[0]))
        prevGradient = currentGradient

    # Sort the list by gradient change: the sharpest peaks will be first
    peakTimes.sort(reverse=True)
    return peakTimes