News Releases

Flex Logix to Speak at the 2021 Linley Spring Processor Forum on Two AI Inference Panels

MOUNTAIN VIEW, Calif., April 19, 2021 /PRNewswire/ -- Flex Logix® Technologies, Inc., supplier of the fastest and most-efficient AI edge inference accelerator and the leading supplier of eFPGA IP, announced today that its executives will be presenting on Day 4 and Day 5 of the 2021 Linley Spring Processor Forum. Topics will include high-performance inference for power-constrained applications, as well as the critical role of software in maximizing the throughput, accuracy, and power efficiency of AI inference solutions.

Following are the details of each Flex Logix presentation. For more information, or to view the presentations after the event, please visit the Flex Logix website.

Session 6: Edge-AI Software 
Presentation title:  Why Software Is Critical for AI Inference Accelerators
Speaker:
Jeremy Roberson, Technical Director and AI Inference Software Architect, Flex Logix
Date: Thursday, April 22
Time: 8:30 am – 10:30 am PT
Summary:  In this presentation, Jeremy will discuss the importance of software in maximizing the throughput, accuracy, and power efficiency of an AI inference accelerator. He will examine how co-developing software with hardware allows for architecture tradeoffs that maximize throughput/power for customer models. The software compiler must seamlessly translate data into meaningful results without requiring knowledge of the hardware's inner workings. Finally, he will discuss how, as CNN models continue to evolve, software adaptability will continue to drive throughput/power/cost improvements that enable broader adoption of AI functionality.

Session 9: Efficient AI Inference 
Presentation title:  High Performance Inference for Power Constrained Applications
Speaker:
Cheng Wang, Sr. VP, Software Architecture Engineering, Flex Logix
Date: Friday, April 23
Time: 10:10 am – 11:40 am PT
Summary:  In this presentation, Cheng Wang will discuss AI inference solutions for power-constrained applications, such as edge gateways, networking towers, and medical imaging devices. He will begin with the considerations for hardware deployment, as these applications have tight thermal budgets and usually do not have space for a full-size PCIe card. This will lead into a brief overview of the M.2 form factor, and he will then discuss the role of an M.2 inference accelerator in the system designs for such applications.

About Flex Logix
Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry's fastest and most-efficient AI edge inference accelerator and will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix's eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to processors. Flex Logix is headquartered in Mountain View, California, with offices in Austin, Texas. For more information, visit https://flex-logix.com.

MEDIA CONTACTS
Kelly Karr
Tanis Communications
kelly.karr@taniscomm.com
+408-718-9350

Copyright 2021. All rights reserved. Flex Logix is a registered trademark and InferX is a trademark of Flex Logix, Inc.

SOURCE Flex Logix Technologies, Inc.