Graph all the things
analyzing all the things you forgot to wonder about
interests: number theory, computational complexity
Two algorithms for finding all the primes up to a bound N are the Sieve of Eratosthenes and the Sieve of Atkin. Eratosthenes is very simple - label each number above 1 as prime to begin with, then go through each number in order and, if it is still marked as prime, mark all higher multiples of it as composite. Atkin is much more complicated, but in theory has a faster runtime - O(N) operations, as opposed to Eratosthenes' O(N log log N). I implemented both in Julia and compared their performance:
The little wings around the data points are uncertainties for runtime. As I've implemented them, they have about the same performance for large N. That log log N term is really small. But Atkin actually has a much higher overhead. To see it, here's a log-log plot of the same data:
I wrote these algorithms in Julia, and the code is linked at the bottom of the page.
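The Eratosthenes procedure described above is short enough to sketch inline too. Here's a hypothetical Python transcription of those steps (not the benchmarked Julia code):

```python
def eratosthenes(n):
    """Return all primes <= n via the Sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            # Multiples below i*i were already crossed off by smaller primes.
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return [i for i, flag in enumerate(is_prime) if flag]
```

Stopping the outer loop at sqrt(n) and starting each crossing-off pass at i*i are the standard touches; summing the work across primes is what gives the O(N log log N) bound.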
The proof that Sieve of Atkin works involves abstract algebra, and is much longer and less obvious than the proof of Sieve of Eratosthenes, but the payoff is splendid. It pleases me that someone thought of this approach to finding primes.
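To give a feel for what that approach actually does, here's a sketch of the commonly cited simplified (mod-12) form of the Sieve of Atkin, in Python - again hypothetical, not my Julia implementation. It toggles each candidate based on how many ways it is represented by three quadratic forms, then strikes out multiples of prime squares:

```python
def atkin(limit):
    """Simplified (mod-12) Sieve of Atkin: all primes <= limit."""
    sieve = [False] * (limit + 1)
    r = int(limit ** 0.5) + 1
    for x in range(1, r):
        for y in range(1, r):
            # Toggle n once per representation by each quadratic form:
            n = 4 * x * x + y * y
            if n <= limit and n % 12 in (1, 5):
                sieve[n] = not sieve[n]
            n = 3 * x * x + y * y
            if n <= limit and n % 12 == 7:
                sieve[n] = not sieve[n]
            n = 3 * x * x - y * y
            if x > y and n <= limit and n % 12 == 11:
                sieve[n] = not sieve[n]
    # Numbers toggled an odd number of times are prime, except for
    # multiples of squares of primes, which get struck out here.
    for k in range(5, r):
        if sieve[k]:
            for m in range(k * k, limit + 1, k * k):
                sieve[m] = False
    return [2, 3] + [n for n in range(5, limit + 1) if sieve[n]]
```

The residue classes mod 12 are exactly where the abstract algebra comes in: each form represents primes in those classes an odd number of times, which is the fact the proof has to establish.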
What I didn't mention before is that there is actually an even faster version of Sieve of Atkin (in theory), which you can read about here. It runs in O(N / log log N) time (sublinear!) and just N^(1/2 + o(1)) memory via "page segmentation": it breaks the range from 1 to N into intervals of size roughly sqrt(N), then works on each interval in sequence, using information from the previous interval to inform which numbers to bother checking in the current interval.
However, this "even faster" version is really bothersome to code, and has runtime with such a high constant factor that it never really outperforms the approach I implemented. As a result, no one ever codes it. Not even me.
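That said, the page-segmentation idea on its own is easy to demonstrate. Here's a hypothetical Python sketch that applies it to plain Eratosthenes rather than to Atkin: find the small primes up to sqrt(N), then sieve intervals of size roughly sqrt(N) one at a time, so you only ever hold about sqrt(N) flags in memory instead of N.

```python
import math

def segmented_sieve(limit):
    """Primes <= limit, sieving one sqrt(limit)-sized segment at a time."""
    seg_size = max(math.isqrt(limit), 2)
    # Plain sieve for the small primes up to sqrt(limit).
    base = [True] * (seg_size + 1)
    base[0] = base[1] = False
    for i in range(2, math.isqrt(seg_size) + 1):
        if base[i]:
            for j in range(i * i, seg_size + 1, i):
                base[j] = False
    small_primes = [i for i, flag in enumerate(base) if flag]
    primes = list(small_primes)
    # Sieve each segment [low, high) using only the small primes.
    low = seg_size + 1
    while low <= limit:
        high = min(low + seg_size, limit + 1)
        seg = [True] * (high - low)
        for p in small_primes:
            # First multiple of p in the segment (at least p*p).
            start = max(p * p, ((low + p - 1) // p) * p)
            for m in range(start, high, p):
                seg[m - low] = False
        primes.extend(low + i for i, flag in enumerate(seg) if flag)
        low = high
    return primes
```

Each segment fits comfortably in cache, which is a big part of why segmentation helps in practice even when the asymptotics don't change.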
If you've ever been taught to properly shoot a basketball (not that I have, but anyway...), you know that the easiest way to do something - the way you already seem to know how to do it - is often suboptimal. That's why I'm a bit surprised that so many companies use Python as their language of choice for serious machine learning or data science. Python was developed to be simple and all-purpose; a language that can do everything, but nothing especially well. numpy is an attempt to compensate for Python's numeric sluggishness, and a decent one. But sometimes you need to partially rely on native Python branching/looping/etc., which causes performance to drop terribly.
Julia seems like a much more serious language for dealing with data. In any case, the obvious, native Julia implementation is faster than the Python numpy one:
A huge advantage Julia has is that it compiles to native code to achieve good performance. But there are benefits other than speed, like type safety and storing boolean vector data in BitArrays, where each boolean is actually just one bit. In contrast, numpy allocates an entire byte for each boolean, eating up your memory.
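To make the memory point concrete, here's a small hypothetical numpy demonstration (not the benchmarked code): a sieve that uses one slice-assignment per prime instead of an inner Python loop, plus a check of how many bytes numpy spends per boolean flag.

```python
import numpy as np

def sieve_np(n):
    """Sieve of Eratosthenes using numpy slicing, no inner Python loop."""
    is_prime = np.ones(n + 1, dtype=bool)
    is_prime[:2] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i :: i] = False  # one vectorized write per prime
    return np.flatnonzero(is_prime)

# numpy stores one full byte per boolean flag:
flags = np.ones(10**6, dtype=bool)
print(flags.nbytes)  # prints 1000000
```

A million flags cost a megabyte here; a Julia BitArray of the same length packs them into about 125 KB.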
I put my code in this Github repo. These aren't the fastest implementations (primegen or something similar holds that title), but they are about as fast as you can get for this level of effort. Using a "wheel" of primes for Eratosthenes can make it faster, but I deliberately avoided doing that because it's not part of the canonical algorithm.
Note that the huge constant lists in the Sieve of Atkin implementation are just lookup tables to save work later on. I precomputed them with another little Julia script.
Someone emailed me, pointing out that my original Python implementation actually used a native Python range where it could just use numpy indexing. I fixed this in the implementation and charts here, and it makes Python look much more reasonable, but Julia is still faster. Along the way I updated the code for Python 3 and Julia 1.8 (wow, it's been a long time). I also corrected some misleading sentences in the post.