Good Math, Good Engineers, and the Right Tools Beat the Hardware Hype
In financial technology, the instinct when something runs slowly is to parallelize, buy GPUs, or move to the cloud. But most performance problems are not hardware problems. They are mathematical ones.

In quantitative finance there is a familiar pattern: when performance starts to lag, the first move is to buy faster machines, rent cloud servers, or throw GPUs at the problem. The narrative is that computation is hard, so the solution must be hardware. It sounds reasonable, but it is often wrong. In my experience, performance issues usually have little to do with hardware and everything to do with mathematics and design.
At Algorithmica, where we have been building quantitative systems since 1994, this lesson has repeated itself many times. Over the years I have seen a shift toward quick fixes and convenience layers, a tendency to assemble Python libraries and call it a day. Python is a wonderful language for research and experimentation, but it has also created a generation of quants who are further from the machine than ever before. They know how to use libraries, but not necessarily why they work. They can import a fast Fourier transform but cannot explain what it actually computes or how memory access patterns affect performance.
In Quantlab, our in-house platform, we chose another path. We built our own language, Qlang, which compiles directly to machine code through LLVM. It is as expressive as C++, yet avoids manual memory management and the traps that come with it. The idea was simple: give quants and developers full control of performance, but keep the productivity of a higher-level language.
That control matters. We recently implemented a calibration of a Heston volatility surface to S&P index options using the COS method. The COS method prices options through Fourier-cosine series expansions of the risk-neutral density, mathematics that dates back to the 19th century but remains as elegant and powerful as ever. By combining a clean mathematical formulation with careful attention to vector operations and data locality, we reduced calibration time from twelve seconds to three-tenths of a second, a fortyfold speedup. No GPUs. No parallelization. No external speed-up libraries. Just solid mathematics and good engineering.
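To make that concrete, here is a minimal sketch of what a COS-method price looks like for a single European call under Heston, written in plain Python with NumPy. It is not Quantlab's Qlang implementation: the function names, the parameter values, and the deliberately crude truncation range are illustrative assumptions, following the published formulation of Fang and Oosterlee (2008). What matters is the shape of the computation: one closed-form characteristic function, one vectorized pass over the expansion terms, no loops.

```python
# Illustrative sketch of COS pricing (Fang & Oosterlee, 2008) for a
# European call under Heston. Names and parameters are assumptions for
# exposition, not Quantlab's production code.
import numpy as np

def heston_cf(u, T, r, q, v0, kappa, theta, sigma, rho):
    """Characteristic function of ln(S_T / S_0) under Heston, in the
    numerically stable "little Heston trap" formulation. r is the risk-free
    rate, q a continuous dividend yield."""
    iu = 1j * u
    d = np.sqrt((kappa - rho * sigma * iu) ** 2 + sigma ** 2 * (u ** 2 + iu))
    g = (kappa - rho * sigma * iu - d) / (kappa - rho * sigma * iu + d)
    e = np.exp(-d * T)
    C = (r - q) * iu * T + kappa * theta / sigma ** 2 * (
        (kappa - rho * sigma * iu - d) * T - 2.0 * np.log((1 - g * e) / (1 - g)))
    D = (kappa - rho * sigma * iu - d) / sigma ** 2 * (1 - e) / (1 - g * e)
    return np.exp(C + D * v0)

def cos_call(S0, K, T, r, q, params, N=256, L=12.0):
    """European call price via a truncated Fourier-cosine expansion.
    The whole sum is one vectorized pass over k: no Python loops."""
    x = np.log(S0 / K)
    # Crude symmetric truncation range for y = ln(S_T / K); production
    # code would use the cumulant-based bounds from the original paper.
    a = x + (r - q) * T - L * np.sqrt(T)
    b = x + (r - q) * T + L * np.sqrt(T)
    u = np.arange(N) * np.pi / (b - a)
    # Closed-form payoff cosine coefficients V_k for a call on [0, b].
    chi = (np.cos(u * (b - a)) * np.exp(b) - np.cos(-u * a)
           + u * (np.sin(u * (b - a)) * np.exp(b) - np.sin(-u * a))) / (1 + u ** 2)
    psi = np.empty(N)
    psi[0] = b
    psi[1:] = (np.sin(u[1:] * (b - a)) - np.sin(-u[1:] * a)) / u[1:]
    V = 2.0 / (b - a) * K * (chi - psi)
    # COS sum: Re[ phi(u_k) * exp(i u_k (x - a)) ] * V_k, first term halved.
    terms = np.real(heston_cf(u, T, r, q, **params) * np.exp(1j * u * (x - a))) * V
    terms[0] *= 0.5
    return np.exp(-r * T) * terms.sum()

# Example with made-up but plausible Heston parameters.
params = dict(v0=0.04, kappa=1.5, theta=0.04, sigma=0.5, rho=-0.7)
print(cos_call(S0=100.0, K=100.0, T=1.0, r=0.02, q=0.0, params=params))
```

Because the cosine expansion converges exponentially fast for smooth densities, a few hundred terms are enough, and the inner loop is a single contiguous sweep through memory. That is the kind of structure that lets a plain, well-formulated implementation outrun brute-force approaches without any special hardware.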
This is not a story about technology, but about mathematics. The foundation of performance lies in how well the problem itself is understood and formulated. Once the math is right, the implementation becomes straightforward. Throwing hardware at an inefficient model is like adding horsepower to a car with flat tires. You move faster for a while, but the fundamentals are still wrong.
That is not to say hardware has no place. In high-frequency trading or large-scale investment-bank systems, additional performance can be unlocked by parallelization, vectorization, or dedicated hardware. But these techniques only matter when built on a solid mathematical core. Without it, optimization becomes noise.
The point is simple. Real performance comes from understanding the mathematics and the machine. It comes from reading academic papers, challenging assumptions, and building tools that fit the problem instead of gluing together generic solutions. It comes from hiring engineers who can explain both the algorithm and the architecture.
In an age where technology is celebrated for its own sake, it is worth remembering that good mathematics still wins. Hardware amplifies; it does not create. The real leverage lies in thinking clearly, designing carefully, and coding with intent.
Sometimes, old-school mathematics, applied with modern precision, is still the fastest way forward.