Explaining the Bizarre Behavior of Robot Traders

Strange stock market patterns, like the ones we discussed this week, are open to a wide variety of interpretations. They are clearly generated by robot traders, but it is unclear what these algorithms are doing.

Nanex, the data services company that discovered and visualized the very high-speed bursts of curious orders, speculated that the bots could provide a millisecond advantage to their operators by confusing their competitors. High-frequency trading experts Michael Kearns of the University of Pennsylvania and Andrew Lo of MIT disagreed with this assessment.

Kearns offered two reasons why Nanex’s “quote stuffing” thesis seemed unlikely to him. First, it is not technically easy to gain such an advantage; second, the data suggest that there really aren’t any competitors to beat in the specific circumstances in which the bots operate.

“The quote stuffing theory is that this behavior is kind of like a denial-of-service attack. You’re flooding some exchange with these orders. Your competitors have to process those orders in their data feeds, but since you placed them, you can ignore them,” Kearns explained. “The reason this is unlikely is that we can’t think of any easy way for someone to ignore the orders they’ve placed themselves without taking a risk.”

Technically speaking, there is simply no “ignore my own bogus orders” button that a trading firm could press.

“What a firm has is nine real-time data feeds from the exchanges [e.g. NASDAQ] which tell them what the quotes on those exchanges are in real time. Say I’m flooding an exchange; how do I know which orders to ignore?” Kearns asked. “I need my code to at least grab every incoming order and inspect it just enough to know it’s my own order, but then I haven’t ignored it at all. These orders are very simple. You can view the raw data, and each is like a line of text. What’s expensive isn’t doing anything fancy with that line of text, it’s inspecting it in the first place.”
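
To make Kearns’s point concrete, here is a minimal sketch in Python of what a feed handler would have to do just to “ignore” its own orders. The one-line message format, order IDs, and function names are all invented for illustration, not any real exchange protocol; the point is that the cheap membership test can only happen after the line has been parsed, which is exactly the inspection cost Kearns describes.

```python
# Hypothetical sketch: there is no "ignore my own orders" shortcut,
# because deciding that a message is yours already requires parsing it.

my_order_ids = {"A1024", "A1025", "A1026"}   # orders this firm has placed

def update_book(symbol: str, price: float) -> None:
    """Stand-in for real order-book maintenance."""
    print(f"book update: {symbol} @ {price:.2f}")

def handle_feed_line(raw_line: str) -> None:
    # Each feed message is just a short line of text; splitting and
    # reading it is itself the work Kearns calls "inspecting it".
    order_id, symbol, price = raw_line.split(",")
    if order_id in my_order_ids:
        return                         # our own quote: dropped, but too late
    update_book(symbol, float(price))  # someone else's quote: process it

handle_feed_line("B2001,AAPL,101.25")  # competitor's order -> processed
handle_feed_line("A1024,AAPL,99.00")   # our own order -> parsed, then dropped
```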

The second reason quote stuffing is unlikely is slightly more difficult to understand. The basic idea is that we only see the algorithms at work in illiquid stocks on particular exchanges, where there are few other buyers and sellers around. If there were, we wouldn’t be able to see the patterns with such clarity, because other traders’ bids and offers would obscure them. “It creates a problem with the argument that it’s being done to slow competitors down,” Kearns concluded. Essentially, in these specific stocks on these specific exchanges at these specific times, there are no competitors to slow down.

So if it’s not quote stuffing, why would a company engage in this behavior? Lo and Kearns offered their own theories about what might be going on.

“To be honest, we can’t find a good reason,” Kearns said. What is particularly difficult to explain is the diversity and prevalence of the patterns. If algorithmic traders are simply testing new bots – which isn’t a bad explanation – it doesn’t seem plausible that they would do so this often. Alternatively, one could imagine that the patterns are generated by some set of systematic information-processing errors, but then it would be difficult to explain their variety.

Kearns does have a leading explanation, though, which he emailed to me after our conversation.

“It is possible that the observed patterns are not malicious, in error, or for testing purposes, but rather for information gathering,” Kearns observed. “One could easily imagine that an HFT shop would want to regularly audit (for example) the latency it experienced on the various exchanges under different conditions, including conditions involving high order volume, rapid price changes and volumes, etc. And one might want this information not just at startup, but on an ongoing basis, because latency and other exchange properties might well change over time, exhibit seasonality of various kinds, etc. Super-HFT groups might even make colocation decisions based on such investigations.”
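
A rough sketch of the latency-audit idea Kearns describes might look like the following. Here `send_order` and `await_echo` are invented stand-ins for a firm’s proprietary exchange gateway and feed handler, not a real API; the burst size is likewise an assumption.

```python
# Hypothetical sketch of Kearns's latency-audit theory: stamp an order on
# the way out, wait for its echo on the exchange's quote feed, and log
# the round trip. send_order/await_echo are invented stand-ins for a
# firm's real gateway and feed handler.

import time
import statistics

def audit_latency(send_order, await_echo, n_probes: int = 1000):
    """Measure order-to-quote round trips over a burst of probe orders."""
    samples_ms = []
    for i in range(n_probes):
        order_id = f"PROBE-{i}"
        t0 = time.perf_counter_ns()
        send_order(order_id)           # a far-from-market quote, soon cancelled
        await_echo(order_id)           # block until it appears in the feed
        samples_ms.append((time.perf_counter_ns() - t0) / 1e6)
    # Run this regularly, not just at startup: as Kearns notes, latency
    # drifts over time, under load, and with seasonality of various kinds.
    return statistics.median(samples_ms), max(samples_ms)
```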

MIT’s Andrew Lo, director of the school’s Laboratory for Financial Engineering, offered a variation on this thesis. He argues that the algorithms are probing not latency but real market conditions.

“What I think is happening is that there are algorithms that have been designed to monitor the markets and basically create a sort of trawling function to try to identify orders that might be executed, and to do it on a regular and relatively systematic basis,” he said.

He compared the algorithms to a “financial radar”.

“I think it’s not random, and it’s not hard to figure out what the motive is,” Lo argued. “If you think about how modern radar works, if you didn’t know anything about radar, what you would see is a pattern of electromagnetic radiation emitted at regular intervals, and then you would see patterns of reflections from objects out there. It’s a financial radar we’re seeing.”

Traders want to place tens of thousands of orders in a very short period of time precisely because they are probing for a split second when a buyer or seller comes along.

“Suppose you want to identify, down to the millisecond, when an order is placed and at what price. If you want to detect the transaction down to the millisecond, you’re going to have to submit orders faster than that,” Lo said. “The pattern gives you an idea of the fineness of the mesh that is being built to try to capture the first trade that occurs.”
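
Under Lo’s reading, the probing loop might look something like this sketch. The gateway primitives `place_order`, `cancel_order`, and `was_filled` are hypothetical, and the one-millisecond dwell time is an illustrative assumption about the fineness of the mesh.

```python
# Hypothetical sketch of Lo's "financial radar": sweep a ladder of prices
# with short-lived orders on a millisecond grid, and record exactly when
# and where one gets hit. The gateway primitives here are invented.

import itertools
import time

def radar_sweep(place_order, cancel_order, was_filled,
                prices, dwell_ms: float = 1.0):
    """Cycle probe orders through a price ladder; return (time, price) of
    the first fill, i.e. the moment a real buyer or seller arrived."""
    for price in itertools.cycle(prices):
        order_id = place_order(price)
        time.sleep(dwell_ms / 1000.0)   # the mesh: one probe per millisecond
        if was_filled(order_id):
            return time.time(), price   # the radar return
        cancel_order(order_id)          # withdraw quickly, as in Nanex's data
```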

This first trade acts as a forecast of the price action. “If you see an order that went for $1.05 at time T and $1.06 at time T+1, you start betting on that for the next few milliseconds,” Lo explained. “The sooner you can spot the trend, the more money you stand to make.”
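
As a toy illustration of the forecasting step Lo describes, a detector might treat two consecutive upward fills as a short-lived buy signal. The function name, data shapes, and signal labels are invented for the example.

```python
# Toy illustration of Lo's forecasting step: consecutive probe fills that
# tick upward (e.g. $1.05 then $1.06) become a short-lived buy signal.

def trend_signal(fills):
    """fills: list of (timestamp_ms, price) tuples from the radar sweep."""
    if len(fills) < 2:
        return "no signal"
    (_, prev), (_, last) = fills[-2], fills[-1]
    if last > prev:
        return "buy"    # early upward tick: bet on the next few milliseconds
    if last < prev:
        return "sell"
    return "no signal"

print(trend_signal([(0, 1.05), (1, 1.06)]))  # -> buy
```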

[Image: Nanex chart of one of the patterns (AON2_072910.png)]

Lo even came up with a way to test the algorithms to see if he was right about what they were doing. He thinks that if you could jump in on one of the structured patterns and accept an offer, the algorithms might move to a different phase of operation.

“What would be interesting, but potentially expensive, to do when you detect patterns like this would be to trigger an order to hit the bid on one of these regular scans and see what happens to the pattern,” Lo said.
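
Lo’s proposed experiment could be sketched along these lines; `detect_scan`, `hit_bid`, `record_pattern`, and the feed of book snapshots are all hypothetical stand-ins rather than anything Lo specified.

```python
# Hypothetical sketch of Lo's experiment: once a regular scan pattern is
# detected, hit the bid on one pass, then watch whether the pattern
# shifts to a different phase. Every primitive here is an invented stand-in.

def run_lo_experiment(feed, detect_scan, hit_bid, record_pattern):
    feed = iter(feed)                     # stream of quote-book snapshots
    for snapshot in feed:
        if detect_scan(snapshot):
            before = record_pattern(snapshot)
            hit_bid(snapshot)             # the potentially expensive probe
            after = record_pattern(next(feed, None))
            return before, after          # did the pattern change phase?
    return None
```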

Kearns argued, however, that the kinds of wild order strategies we see are not necessary to probe the market. “What’s weird about these patterns, the jagged patterns, let’s say, where you alternate [prices up and down],” he said, “is that if I were exploring a large number of patterns to see what works, there’s just no need to place orders this far out of the market, especially given how quickly they’re being withdrawn.”

The only people who know for sure what is going on in the market are the traders themselves and the exchanges they work on.

At the highest level, however, the robot traders offer a unique window onto the sheer speed and complexity of our financial system. When this new and seemingly pervasive behavior was discovered, even the brightest minds in the field could not immediately explain it.

Our regulators have tools designed to police a market that moves in seconds, but technology has pushed the markets down to the level of milliseconds.

“The observation is that it’s not as trivial as it sounds, not so much because there’s anything wrong with high-frequency trading, but rather because the regulatory infrastructure that monitors these markets is not designed to deal with this kind of latency and high frequency,” Lo pointed out. “It can create significant problems, not the least of which is the Flash Crash. There are equity issues. There are transparency issues. There are stability issues. We need to resynchronize the regulatory infrastructure with the technology of our time.”

“We see innovations that dramatically increase market speed and throughput, and it works great until it doesn’t,” Lo concluded. “And when you have a problem, like the Flash Crash, then you get version 2.0 and people fix it. We’re still in the version point-something stage, and there are definitely improvements that need to be made to the regulatory infrastructure.”

Images: All images courtesy of Nanex. Full explanations of the patterns are available on their site.
