Cloud Quality AI at the Edge
Powered by Innovative Hardware & Intelligent Software

Made for: AI-Enabled Product and Solution Providers, Developers and Researchers
Featuring complete AI solutions capable of processing:
  • Any type of Neural Net
  • Any input resolution
  • Any Neural Net size

Status Quo

There is a lack of higher-functioning, high-quality Edge AI products on the market today. Some HW solutions simply cannot run larger, higher-quality networks. Others trade off image resolution against network size, limiting the quality of the AI functions. The ones that can run higher-quality networks are more expensive and consume too much power. Most cannot run multiple networks concurrently to support more complex functions. As a result, today's AI-enabled products remain rather primitive.

Our Approach

At DeGirum, we exploited new dimensions in HW and SW design to develop an AI solution that delivers significant performance benefits without constraining the size or type of the network. Our key technology features include:
- Optimal HW for Pruned Networks
- Reconfigurable Data Processing Units
- Memory Conserving Network Compiler
- Deployment-Ready Software Stack

Our Technology

Empirical evidence shows that a large fraction of the connections in neural networks can be pruned without impacting the accuracy of the network. Our compute architecture benefits from the smaller storage and data-bandwidth requirements of pruned networks. A key differentiator of our technology is the ability to skip computations involving the pruned connections, which translates into significant performance and energy-efficiency benefits. For example, a network layer that is 80% sparse leaves only one in five multiplies to perform, for up to a 5X reduction in compute time.
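The arithmetic behind the 5X figure can be sketched in a few lines. This is an illustrative model only (toy layer sizes and a simple magnitude-pruning rule, not DeGirum internals): if HW skips every multiply whose weight is zero, an 80%-sparse layer needs one fifth of the multiplies of a dense one.

```python
import random

def dense_macs(weights):
    # Conventional dense hardware multiplies every weight, zero or not.
    return len(weights)

def sparse_macs(weights):
    # Sparsity-aware hardware skips the multiplies whose weight is zero.
    return sum(1 for w in weights if w != 0.0)

# Toy layer of 1000 weights (illustrative numbers only).
random.seed(0)
weights = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# Magnitude pruning: zero out the 80% smallest-magnitude weights.
threshold = sorted(abs(w) for w in weights)[int(0.8 * len(weights))]
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

speedup = dense_macs(pruned) / sparse_macs(pruned)
print(f"compute reduction: {speedup:.1f}X")  # 5.0X for an 80%-sparse layer
```

In practice the achievable reduction depends on how the sparsity is distributed and how well the HW can exploit it, which is why the text says "up to 5X".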


Our technology opens a new world of possibilities for AI solution providers and developers. It is uniquely suited to building AI hubs that process networks of different topologies in the context of a more complex application. For example, a product concurrently running multiple streams of object detection with complex networks such as YOLO V3, speech recognition with DeepSpeech, and face recognition with ArcFace can be enabled by our single-chip solution at just a few watts. Please contact us for more details on the many advantages of our platform.
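The AI-hub pattern above can be sketched in host-side pseudocode. The three "models" here are hypothetical stand-ins for YOLO V3, DeepSpeech, and ArcFace inference (the real networks and any DeGirum API are not shown); the point is the dispatch structure, where independent input streams feed different networks concurrently.

```python
import queue
import threading

def run_stream(name, model, inputs, results):
    # One worker per input stream; on the target HW a single chip would
    # time-share all networks, but the dispatch pattern is the same.
    for item in inputs:
        results.put((name, model(item)))

# Hypothetical stand-ins for object detection, speech, and face recognition.
detect_objects = lambda frame: ("boxes", frame)
transcribe = lambda audio: ("text", audio)
match_face = lambda crop: ("identity", crop)

results = queue.Queue()
streams = [
    threading.Thread(target=run_stream,
                     args=("camera", detect_objects, ["frame0", "frame1"], results)),
    threading.Thread(target=run_stream,
                     args=("mic", transcribe, ["clip0"], results)),
    threading.Thread(target=run_stream,
                     args=("door", match_face, ["face0"], results)),
]
for t in streams:
    t.start()
for t in streams:
    t.join()

print(results.qsize())  # 4 results from three concurrent network streams
```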

Build with DeGirum

Our test chip will be available for sampling by the end of 2020. We invite solution providers interested in early engagement to kick-start their development using our FPGA platform. Our flexible SW supports porting models from standard frameworks such as TensorFlow, PyTorch, and ONNX. Developers can contact us to request access to our ready-to-deploy SW stack. Researchers, long frustrated by the lack of HW capable of taking advantage of pruned networks, can now design better network architectures that balance network complexity, performance, and accuracy. We encourage researchers to evaluate our pruned models and share their own models on our GitHub page.

Contact Us