Antony and Rainer are back again for a webinar next week following the - phenomenal - success of the previous one. But let me give some background.

The history of trading can be seen as an ongoing search for arbitrage. “Buy low; sell high”. You get “Alpha” by finding new arbitrage opportunities. You earn fat margins until the opportunity is incorporated into new quantitative models. A French ex-physicist gets a prize - and then it’s just Beta. That’s how it’s evolved for HFT too. A good part of recent C++ development has been done with a glance over the collective shoulder at HFT.

When I was first introduced to that space - high-frequency trading - it wasn’t what it is today. I’ll give you an example. I was at a big European bank in HK - one of the biggest derivatives traders in the region. We had a system that integrated trading across all the listed options exchanges in the East Asian time zones: Bangkok to Sydney, taking in Singapore, Hong Kong, Taipei, Manila, Seoul and both Osaka and Tokyo.

We had a stable system, but it was getting long in the tooth. I led the team working through a set of upgrades. We made piecemeal changes as we discussed our new ideas. Then one day it stopped working. We got a call from the trading floor: “we’re behind the market”. That’s a worry: it meant our prices could be noticeably out from the traders’ other views of the current market price, and they could enter money-losing orders. We checked the logs - had an exchange feed stopped? Was there a hung process? But nothing seemed wrong…

The calls from the Floor got more urgent: “hey, we’re way out!”, “come upstairs now!”. Indeed, as time went on they were clearly seconds behind… then tens of seconds… This was a disaster. Trading stopped.

Beyond the money being lost, this had regulatory implications. We were a registered market-maker. We had contractual obligations to provide liquidity. If we didn’t know the price we couldn’t do that. The clock was ticking before penalties would be imposed. Millions…

So we dove in.

I won’t bore you with the details of our search - it was long, stressful and required a great deal of coffee. There was still no resolution when I reached home, exhausted, at 3am. We’d seen strange masses of string comparisons - of all things - in the ptrace logs! What was going on? I drifted into fitful sleep.

I suddenly woke at 6am: data structures! Strings were being compared because we were matching records from the exchanges and placing them into internal data structures. It’s just that it was happening too often. I grabbed more coffee and raced into the office. I went to the base class of the base class of the structure handling our processing queues. There it was.

As data came in from the exchanges - market prices, fills for orders and so on - we processed it, placed it in those internal structures and passed it on to subsequent threads of execution. For performance reasons we used hash tables. With a hash table you pre-allocate a number of “buckets”, scaled to roughly the amount of data you expect to hold in the table. Entries that hash to the same bucket - hash collisions - are chained in a linked list attached to that bucket.
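To make that concrete, here is a minimal sketch of the idea - a chained hash table with a fixed bucket count. It is written purely for illustration (our production class was different, and all the names here are invented):

```cpp
#include <functional>
#include <list>
#include <string>
#include <utility>
#include <vector>

// Illustrative chained hash table: each bucket owns a linked list, and keys
// that hash to the same bucket are appended to that bucket's list and found
// again by a linear search along the chain.
template <typename Key, typename Value>
class ChainedHashTable {
public:
    explicit ChainedHashTable(std::size_t bucket_count = 20)  // note the tiny default
        : buckets_(bucket_count) {}

    void insert(const Key& key, Value value) {
        auto& chain = buckets_[index_of(key)];
        for (auto& entry : chain) {
            if (entry.first == key) { entry.second = std::move(value); return; }
        }
        chain.emplace_back(key, std::move(value));
    }

    const Value* find(const Key& key) const {
        const auto& chain = buckets_[index_of(key)];
        for (const auto& entry : chain) {   // cost grows with the chain length
            if (entry.first == key) return &entry.second;
        }
        return nullptr;
    }

private:
    std::size_t index_of(const Key& key) const {
        return std::hash<Key>{}(key) % buckets_.size();
    }

    std::vector<std::list<std::pair<Key, Value>>> buckets_;
};
```

The important property is that lookup cost is proportional to the length of the chain you land in - which is roughly the number of entries divided by the number of buckets.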

The constructor of our hash tables is where the bucket count is defined. That parameter had been left out, so the default was used. The default was 20 - two zero. There were typically 10 million entries in each hash table. The result, in our system, was that each table was in fact a set of 20 linked lists, each around 500,000 entries long.

So, not long after the start of the trading day, every trade, fill or message had to traverse multiple 500,000-entry linked lists, comparing itself against other entries’ identifiers - all strings. I fixed the problem by adding one parameter in one place.
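Mapped onto the sketch above, the before and after look roughly like this. The type, the payload and the exact bucket count chosen for the fix are all illustrative, but the arithmetic is the real story:

```cpp
#include <string>

struct Fill { /* hypothetical payload: order id, price, quantity... */ };

// Before: bucket count omitted, so the default of 20 applies. With around
// 10,000,000 entries the average chain is 10,000,000 / 20 = 500,000 nodes,
// so each lookup walks ~500,000 string comparisons on average - a linear
// scan dressed up as a hash table.
ChainedHashTable<std::string, Fill> broken_table;

// After: one added parameter sizes the table to the expected volume, keeping
// the average chain length near one and lookups close to constant time.
ChainedHashTable<std::string, Fill> fixed_table(10'000'000);
```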

But how did this happen? I viewed the check-in history: a developer in another region, long since departed, had made the initial check-in four years earlier, when the system was first introduced. At that point trading had been very thin. The amazing thing is that in all those four years it had never been noticed! It was only with the financial volatility preceding the global financial crisis that volumes finally revealed the problem.

What stunned me was just how performant these machines were. They could sustain masses of unnecessary processing and no one noticed for years!

Well, they found a use for that processing headroom - that’s where HFT lives. Carl Cook explains it well in his “When a Microsecond Is an Eternity” lecture. A great mother lode of Alpha lies between the ponderous synaptic firings of mere human trader brains. That experience in Hong Kong revealed to me how vast the HFT opportunity is.

It’s no exaggeration to say that HFT’s needs have significantly revitalized C++. The value of saved microseconds means that implicit costs like garbage collection in other languages are unacceptable. Instead, interest has focused on C++’s template metaprogramming, which has become a marketable skill in its own right. With C++20’s concept keyword, much of that machinery has now been brought into the language itself.
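For a flavour of what that looks like, here is a small, self-contained C++20 example - illustrative only, and not taken from the webinar material; the types and the pricing function are mine. A concept constrains a mid-price calculation to any type that exposes bid and ask prices:

```cpp
#include <concepts>
#include <iostream>

// A concept: the compile-time requirements are stated in the language itself,
// rather than encoded in template metaprogramming tricks such as SFINAE.
template <typename T>
concept Quote = requires(const T& q) {
    { q.bid() } -> std::convertible_to<double>;
    { q.ask() } -> std::convertible_to<double>;
};

// Only types satisfying Quote are accepted; violations give readable errors.
double mid_price(const Quote auto& q) {
    return (q.bid() + q.ask()) / 2.0;
}

struct FxQuote {
    double bid_, ask_;
    double bid() const { return bid_; }
    double ask() const { return ask_; }
};

int main() {
    std::cout << mid_price(FxQuote{1.0842, 1.0844}) << '\n';  // prints 1.0843
}
```

What previously needed enable_if contortions becomes a named, checkable requirement.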

In our webinar, Antony Peacock will first discuss real-world HFT C++ techniques with our moderator, Dr Jahan Zahid - himself an algorithmic trading veteran. Rainer Grimm, C++ and Python trainer and mentor, will then take us through concepts. Finally, Jahan will ask me about some of my experiences recruiting financial developers.