
- Chip utilization ratio up to 94%
- 3x increase in energy efficiency ratio
- 5 ms latency

Nebula Accelerator NA-100c
High-performance AI inference acceleration board designed for edge and backend devices
Compute: 1.64 TOPS
Interface: PCIe 3.0 x8
Model          Latency      Throughput
ResNet-50      4.87 ms      205.7 FPS
ResNet-101     8.76 ms      114.2 FPS
VGG16          21.49 ms     46.5 FPS
Inception-V4   17.87 ms     55.9 FPS
YOLOv3         38.48 ms     25.9 FPS
SSD-FPN        113.33 ms    8.8 FPS
*KY-SSD        2.97 ms      337.5 FPS
*U-Net         445.18 ms    2.2 FPS
Note: Batch = 1, INT8. The CNN models above were built with the TensorFlow framework; *KY-SSD and *U-Net are custom CNN networks.
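
Since these are batch-1 measurements, throughput is effectively the reciprocal of latency (FPS ≈ 1000 / latency in ms), which is why the two columns track each other. A minimal Python sketch of this relationship, reusing the figures from the table above (the variable names are illustrative only):

    # Batch-1 relationship between latency and throughput; figures copied from the table above.
    latency_ms = {
        "ResNet-50": 4.87,
        "ResNet-101": 8.76,
        "VGG16": 21.49,
        "Inception-V4": 17.87,
        "YOLOv3": 38.48,
        "SSD-FPN": 113.33,
        "KY-SSD": 2.97,
        "U-Net": 445.18,
    }

    for model, ms in latency_ms.items():
        fps = 1000.0 / ms  # single-stream frames per second
        print(f"{model}: {fps:.1f} FPS")  # e.g. ResNet-50 -> ~205 FPS, matching the table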


Rainman Accelerator
High-performance AI inference acceleration board designed for frontend devices
Compute: 102.4 GOPS
Interface: Gigabit Ethernet
Power consumption: 7.0-8.5 W

Advantages of "Nebula" and "Rainman"
- High performance: 200 FPS, 16-channel real-time detection on a single card
- Low latency: 10 ms
- Low power consumption: CAISA architecture, 10x increase in energy efficiency ratio (see the sketch below)
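
Energy efficiency ratio here is throughput per watt. A minimal sketch of that calculation using the Rainman figures listed above (102.4 GOPS at 7.0-8.5 W); the function name is illustrative, and the 10x figure compares against a baseline not stated on this page:

    # Energy efficiency ratio as throughput per watt, using the Rainman spec above.
    def efficiency_gops_per_watt(throughput_gops: float, power_w: float) -> float:
        return throughput_gops / power_w

    print(efficiency_gops_per_watt(102.4, 7.0))  # ~14.6 GOPS/W at the lower power bound
    print(efficiency_gops_per_watt(102.4, 8.5))  # ~12.0 GOPS/W at the upper power bound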