The crawler is actually pretty simple: it goes through a list of open nodes (i.e. users) and constructs a new set of to-be-visited nodes (i.e. the users' friends), which is then used in the next iteration. That's essentially a breadth-first search, not unlike Dijkstra's algorithm with uniform edge weights. To gather a first set of users, it fetches the members of a couple of groups. All friends of these members which match our criteria (in this case being located in the Netherlands) will be checked in the next iteration.
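
To make the loop concrete, here is a minimal sketch in Python. The helpers `fetch_group_members()`, `fetch_friends()` and `is_dutch()` are hypothetical stand-ins for the actual API calls and the country filter; this is not the original code, just the shape of the algorithm.

```python
def crawl(seed_groups):
    """Breadth-first crawl of the friend graph, starting from a few groups."""
    visited = set()
    # Seed the frontier with the members of a couple of groups.
    frontier = {user for group in seed_groups
                     for user in fetch_group_members(group)}   # hypothetical helper
    iterations = 0

    while frontier:
        iterations += 1
        next_frontier = set()
        for user in frontier:
            visited.add(user)
            # Queue every friend that matches the criteria and hasn't been seen yet.
            for friend in fetch_friends(user):                 # hypothetical helper
                if is_dutch(friend) and friend not in visited and friend not in frontier:
                    next_frontier.add(friend)
        frontier = next_frontier

    return visited, iterations
```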
The problem with this algorithm? It will only find users who are either in one of the initial search groups or connected to that set by a path of friendships. We know from the Small-World Experiment that it is very unlikely that the graph splits into several large, mutually unreachable components, but many users will simply not have friended anyone at all, and those will never show up in the results. The code was never meant to find an exhaustive set of users, though; it was meant to be small and simple, so this flaw is OK.
Fun fact: the diameter of the graph of Dutch users and their friend-connections (i.e. the maximum, over all pairs of vertices, of the shortest-path distance between them) is ≥ 14: the crawler took 14 iterations to complete, so some user is at least 14 hops away from the seed set. Much more than I'd have imagined!
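
As a toy illustration of why the iteration count gives a lower bound on the diameter (this uses networkx on a made-up chain of users, not the real friend graph): reaching everyone from a seed in 14 layers means two users sit at least 14 hops apart.

```python
import networkx as nx

# Toy graph, not the real data: a chain of 15 users.
G = nx.path_graph(15)
seed = 0

# Number of BFS layers needed to reach everyone from the seed
# (the analogue of the crawler's iteration count).
layers = max(nx.single_source_shortest_path_length(G, seed).values())

print(layers)           # 14 iterations to reach the farthest user
print(nx.diameter(G))   # so the diameter is at least 14 (here exactly 14)
```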