- Ultra-low latency at batch 1 enables real-time AI insights from streaming datasets for the financial services industry.
- Near-linear scalability on GroqNode™, which maintains low latency and high throughput as systems scale to accommodate increasing data volumes, minimizes value loss.
- The Groq development environment simplifies deployment and beats industry-average time-to-production through a software-first, generalized compiler approach.
MOUNTAIN VIEW, Calif., Nov. 1, 2022 /PRNewswire/ -- Today, the Securities Technology Analysis Center (STAC®) published audited benchmarking results from Groq for the financial industry, showcasing ultra-low latency, especially at low batch sizes such as batch 1. Over the last few years, the financial services industry has been asking vendors to show performance numbers on market-specific workloads. Amongst the compute incumbents in the space, Groq is the first to announce STAC-ML audited benchmark results, implemented on a GroqNode™ server that includes eight GroqCard™ 1 Accelerators and two AMD EPYC™ 7413 processors.
The results showed extremely low long short-term memory (LSTM) latencies, such as LSTM_A (small model size) with a median latency of 0.054ms per inference when running one model instance, and GroqNode throughput of 471,585 inferences per second when running eight model instances in parallel. These repeatable results, possible with Groq's deterministic compute, demonstrate unparalleled performance regarding both latency and throughput. The impact for the financial services industry is more accurate pricing predictions for real-time trading and risk analysis using machine learning (ML) models.
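For readers less familiar with the workload, a single LSTM inference step at batch 1 can be sketched in plain NumPy. The gate equations below are the standard LSTM formulation; the weights and dimensions are illustrative assumptions, not the STAC-ML LSTM_A model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b                  # stacked gate pre-activations
    H = h.shape[0]
    i = 1 / (1 + np.exp(-z[:H]))           # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))      # forget gate
    g = np.tanh(z[2 * H:3 * H])            # candidate cell update
    o = 1 / (1 + np.exp(-z[3 * H:]))       # output gate
    c_new = f * c + i * g                  # new cell state
    h_new = o * np.tanh(c_new)             # new hidden state
    return h_new, c_new

# Batch-1 inference over a short time series (illustrative sizes)
rng = np.random.default_rng(0)
D, H = 4, 8                                # input and hidden dimensions
W = rng.standard_normal((4 * H, D)) * 0.1  # input weights for all four gates
U = rng.standard_normal((4 * H, H)) * 0.1  # recurrent weights for all four gates
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(16):                        # 16 time steps, one sample per step
    x = rng.standard_normal(D)
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)                             # (8,)
```

Because each step's hidden state feeds the next, batch-1 latency is dominated by this sequential dependency, which is why deterministic, low-latency execution of exactly this kind of per-step matrix work is what the STAC-ML benchmark stresses.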
Amr El-Ashmawi, VP of Vertical Markets at Groq, commented, "There are many factors that can cause the pricing of equities to fluctuate. News feeds bearing good or bad news about the current or future prospects of the market are arguably one of the most influential factors driving daily price fluctuations. Using ML models such as RNN/LSTMs can help forecast equity pricing, reducing portfolio risk."
The results also demonstrate that when it comes to LSTMs, an example of a time-series AI model in which past data is as significant as current data, developers need not compromise model functionality or complexity to get performance. Groq demonstrated the simplicity of porting STAC's models to its development environment using a software-first, generalized compiler approach. With just a single line of code, developers can port numerous existing PyTorch or TensorFlow models, dramatically simplifying and accelerating the ML development process.
Peter Nabicht, President of STAC, commented, "STAC benchmark tests are specified by customers based on their business needs. Financial firms in the STAC Benchmark Council collaborated on this version of STAC-ML because low latency inference is a growing necessity, often in environments with constrained power and space. We thank Groq for their leadership in being the first vendor to provide the industry with needed data on performance, quality, and efficiency."
Come meet with Groq at the STAC Summit in London on November 10 at the Leonardo Royal Hotel London City, and at Supercomputing 22 in Dallas from November 13-18. To see the full list of events Groq will attend, click here, and for the latest company news and updates, follow us on LinkedIn and Twitter.
Headquartered in Mountain View, CA, with geo-agnostic teams across the US, Canada, and the UK, Groq builds an innovative deterministic single-core Tensor Streaming Processor architecture that lays the foundation for its compiler's unique ability to predict the exact performance and compute time of workloads while delivering uncompromised low latency. Groq has raised $367 million, with Series C funding co-led by D1 Capital and Tiger Global Management. For more information, visit www.groq.com.
"STAC" and all STAC names are trademarks or registered trademarks of the Securities Technology Analysis Center, LLC.
 SUT ID GROQ221014, www.STACresearch.com/GROQ221014