DEVELOPERS

Overview

Bring Your Own AI Models

Empower Your AI Ambitions with DEEPX NPU

Unlock the full potential of any deep learning model with our state-of-the-art DEEPX NPU. Seamlessly integrate various deep learning frameworks through our groundbreaking DEEPX NPU SDK, 'DXNN'.

Introducing DXNN: Your Gateway to AI Excellence

DXNN offers a comprehensive software ecosystem, meticulously designed for our DEEPX AI SoCs, featuring:

  • IQ8™ (Intelligent Quantization Integer 8)
    IQ8 is DEEPX's Intelligent Quantization technology based on 8-bit integers. Compared with GPU-based solutions using 32-bit floating point (FP32), IQ8 maintains the same level of accuracy, and in some cases even exceeds it.
  • DX-COM (NPU Compiler)
    At the core of model optimization, DX-COM includes a high-performance quantizer for maximum accuracy and efficiency. This powerful tool ensures your models are precisely tuned for optimal NPU inference.
  • DX-RT (NPU Runtime System Software)
    This robust suite provides an API-enabled runtime, dedicated NPU device drivers, and advanced NPU firmware, ensuring seamless operation.

Designed to integrate effortlessly with a wide range of DNN models from major AI frameworks such as TensorFlow, PyTorch, and Caffe, DXNN is your conduit to the forefront of deep learning technology.
Below, you can explore the vertical view of the DEEPX SDK.

[Diagram] Your GPU-trained AI Models → DXNN Translates Automatically → DEEPX AI SoCs

Join DEEPX's Exclusive EECP for Direct Access
to Pioneering Hardware and Software Products

Dive into the forefront of AI innovation by enrolling in DEEPX's Early Engagement Customer Program (EECP). As a member, you'll gain firsthand access to two engineering-sample versions of our groundbreaking AI chip, BASIC and PRO. You'll also receive our state-of-the-art software development kit, DXNN, complemented by dedicated technical support to guide you every step of the way.

For more detailed information and to become a part of this exciting journey, please visit the link below. Apply now and embark on a transformative AI experience. Upon application confirmation, a representative from the DEEPX Business team will be in touch within 2-3 days.

Vertical View

Unleash Deep Learning's True Potential with DXNN

• Customer Field

Step into an environment where executing deep learning algorithms is not just efficient but also incredibly intuitive, thanks to DXNN’s advanced software abstraction. Our SDK supports models from all major frameworks and boasts an expansive model zoo, featuring over 180 models. With DXNN, you're not just running algorithms; you're spearheading the next wave of AI innovation.
Embrace your AI vision with DEEPX NPU - the nexus of innovation, precision, and efficiency.

• DXNN® DEEPX NPU SDK

World's Top-Performing Quantizer

DXNN supports automatic quantization of DNN models trained in floating-point format. Models quantized by DXNN reach accuracy on par with, and sometimes above, their original FP32 representations running on a GPU.
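
To illustrate the underlying idea (a generic technique, not DEEPX's proprietary IQ8 algorithm), symmetric per-tensor INT8 quantization of FP32 weights can be sketched as:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map FP32 values to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an FP32 approximation from the INT8 tensor and its scale."""
    return q.astype(np.float32) * scale

# FP32 weights, as a GPU-trained model would hold them
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# INT8 storage is 4x smaller; the rounding error is bounded by half a scale step
err = np.abs(w - w_hat).max()
print(f"max abs error: {err:.4f} (scale = {scale:.4f})")
```

Production quantizers improve on this sketch with per-channel scales and calibration data, which is where accuracy close to FP32 comes from.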

NPU Compiler for DNN Models

The DEEPX compiler translates trained DNN inference models into binaries for the DEEPX NPU. The result is execution code optimized for accuracy, latency, throughput, and efficiency.

Supreme Optimizer Streamlines the Model Inference Process

DXNN's optimizer sharply reduces the amount of computation without any loss of AI accuracy. It aggressively replaces sub-graphs with optimized equivalents, for example by fusing operators or reordering them.
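
Operator fusion of this kind can be sketched generically. The example below folds an inference-time BatchNorm into the preceding convolution's parameters (a 1x1 convolution written as a plain matrix for brevity); this is a standard graph optimization, not DXNN's actual implementation:

```python
import numpy as np

def fuse_conv_bn(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold an inference-mode BatchNorm into the preceding conv's weights/bias.
    W: (out_ch, in_ch) weight matrix of a 1x1 convolution."""
    s = gamma / np.sqrt(var + eps)              # per-output-channel scale
    return W * s[:, None], (b - mean) * s + beta

rng = np.random.default_rng(1)
W, b = rng.normal(size=(8, 16)), rng.normal(size=8)
gamma, beta = rng.normal(size=8), rng.normal(size=8)
mean, var = rng.normal(size=8), rng.uniform(0.5, 2.0, size=8)
x = rng.normal(size=16)

# Unfused: convolution followed by batch normalization
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta

# Fused: a single convolution with rewritten parameters, same result
Wf, bf = fuse_conv_bn(W, b, gamma, beta, mean, var)
y_fused = Wf @ x + bf

print("max deviation:", np.abs(y_ref - y_fused).max())
```

Fusion like this removes an entire operator from the graph at no accuracy cost, which is why compilers apply it aggressively.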

User-Friendly Host Communication and Runtime

DEEPX’s runtime API provides commands for loading models, executing inference, passing model inputs, receiving inference results, and a set of functions for managing devices.
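
The typical call sequence such a runtime exposes can be sketched with a stub. The class and method names below are illustrative placeholders only, not the actual DX-RT API:

```python
import numpy as np

# Hypothetical stand-in for an NPU runtime; names are illustrative and
# do not reflect the real DX-RT interface.
class NpuRuntime:
    def load_model(self, path):
        # A real runtime would map the compiled binary onto the NPU;
        # this stub just returns a handle.
        return {"path": path}

    def run(self, model, inputs):
        # A real runtime would transfer inputs to the device, trigger
        # inference, and read back outputs; this stub echoes the input.
        return inputs

rt = NpuRuntime()
model = rt.load_model("model.dxnn")            # 1. load the compiled model
x = np.zeros((1, 3, 224, 224), np.float32)     # 2. prepare an input tensor
y = rt.run(model, x)                           # 3. execute inference
print("output shape:", y.shape)                # 4. consume the results
```

The point of the sketch is the flow — load, feed inputs, run, read outputs — which is the shape most NPU runtime APIs share.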

(USA) 3003 North First Street #316, San Jose, CA 95134
(SOUTH KOREA) 5F, 20, Pangyoyeok-ro 241beon-gil, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea
E-mail : info@deepx.ai

Copyright 2024 DEEPX. All Rights Reserved.