

MOUNTAIN VIEW, Calif., May 13, 2022 /PRNewswire/ -- Flex Logix® Technologies, Inc., supplier of fast and efficient edge AI inference accelerators and the leading supplier of eFPGA IP, announced today that it will be presenting on two Enabling Technology Tracks at the upcoming Embedded Vision Summit in Santa Clara, CA. One presentation will discuss the importance of properly pairing hardware and software to maximize computational performance, while the other will highlight the novel dynamic TPU array architecture in the Flex Logix InferX™ platform.

Below are details on each of the presentations:

Presentation 1: High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology

  • Speaker: Cheng Wang, Senior Vice President and Co-founder, Flex Logix
  • Abstract: To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk will present and demonstrate the novel dynamic TPU array architecture of Flex Logix's InferX X1 accelerators and contrast it to current GPU, TPU and other approaches to delivering the teraops computing required by edge vision inferencing. We will compare latency, throughput, memory utilization, power dissipation and overall solution cost. We'll also show how existing trained models can be easily ported to run on the InferX X1 accelerator.
  • When: Tuesday, May 17th
  • Location: Santa Clara Convention Center, Santa Clara
  • Time: 1:30 – 2:00 pm

Presentation 2: The Flex Logix InferX X1: Pairing Software and Hardware to Enable Edge Machine Learning

  • Speaker: Randy Allen, Vice President of Software, Flex Logix
  • Abstract: Machine learning is not new—the term was first coined in 1952. Its explosive growth over the past decade has not been the result of technical breakthroughs, but instead of available compute power. Similarly, its future potential will be determined by the amount of compute power that can be applied to an ML problem within the constraints of allowable power, area and cost. The key to increasing computation power is properly pairing hardware and software to effectively exploit parallelism. The Flex Logix InferX X1 accelerator is a system designed to fully utilize parallelism by teaming software with parallel hardware that is capable of being reconfigured based on the specific algorithm requirements. In this talk, we will explore the hardware architecture of the InferX X1, the associated programming tools, and how the two work together to form a cost-effective and power-efficient machine learning system.
  • When: Wednesday, May 18th
  • Location: Santa Clara Convention Center, Santa Clara
  • Time: 11:25 – 11:55 am

Visit the Flex Logix Booth

At booth #205, Flex Logix will demonstrate the InferX X1 edge AI inference accelerator running various inference models, delivering industry-leading inference/watt for large, complex AI models in both Linux and Windows environments. The X1 accelerator is based on a dynamically reconfigurable TPU architecture that offers GPU-class performance at a small fraction of the size and power of traditional GPU offerings. It is available now in multiple form factors, including M.2 and PCI Express boards.

About Flex Logix

Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's most efficient AI edge inference accelerator and will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to general-purpose processors. Flex Logix is headquartered in Mountain View, California and has offices in Austin, Texas. For more information, visit

Kelly Karr
Tanis Communications

SOURCE Flex Logix Technologies, Inc.