Arduino UNO Q with Qualcomm AI Chip: Enabling Next-Generation Edge Intelligence for Embedded AI Prototyping

Authors

  • Yi-Sheng Hsiao Institute of Electro-Optical Engineering, National Yang Ming Chiao Tung University, Taiwan
  • Swarnajit Bhattacharya Department of Electrical and Computer Science Engineering, National Yang Ming Chiao Tung University, Taiwan
  • Asim Halder Department of Applied Electronics and Instrumentation Engineering, Haldia Institute of Technology, India

DOI:

https://doi.org/10.70112/ajes-2025.14.2.4279

Keywords:

Dual-Processor Architecture, AI Inference, Benchmark Analysis, Memory Bandwidth, Real-Time Control

Abstract

The Arduino UNO Q introduces a novel dual-processor heterogeneous architecture, combining a Qualcomm Dragonwing QRB2210 microprocessor with a real-time STM32U585 microcontroller. The QRB2210 features a quad-core 64-bit Arm Cortex-A53 CPU (2.0 GHz) with an Adreno 702 GPU (845 MHz), delivering significant computational improvements over legacy Arduino platforms. Benchmark analysis reveals that the UNO Q achieves a 12.5× throughput improvement over the Arduino UNO R3 (16 MHz) and a 4.2× improvement over the UNO R4 WiFi (48 MHz). The memory architecture shows a 1,048,576× increase in SRAM relative to the UNO R3, with 2 GB of LPDDR4X enabling complex AI inference. Peak memory bandwidth reaches 2.4 MB/ns, compared to 0.32 MB/ns on the UNO R3. The dual-brain architecture enables real-time deterministic control via the STM32U585 subsystem, while leveraging GPU acceleration for TensorFlow Lite inference with sub-100 ms latency. This work examines the architectural innovations and practical implications for edge AI, IoT, and robotics, which require both high-performance computing and real-time response guarantees in resource-constrained environments.
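The memory ratios quoted in the abstract can be checked with a few lines of arithmetic. The sketch below assumes the UNO R3's 2 KB SRAM as the baseline (a value from the ATmega328P datasheet, not stated in the abstract itself) and reproduces the 1,048,576× capacity figure and the implied peak-bandwidth ratio:

```python
# Sanity check of the memory figures quoted in the abstract.
# Baseline assumption: Arduino UNO R3 (ATmega328P) has 2 KB of SRAM.

UNO_R3_SRAM_BYTES = 2 * 1024       # 2 KB SRAM on the ATmega328P
UNO_Q_RAM_BYTES = 2 * 1024**3      # 2 GB LPDDR4X on the UNO Q

ram_ratio = UNO_Q_RAM_BYTES // UNO_R3_SRAM_BYTES
print(ram_ratio)                   # 1048576, matching the abstract's figure

# Peak memory bandwidth figures as quoted (2.4 vs. 0.32, same units)
bw_ratio = 2.4 / 0.32
print(round(bw_ratio, 1))          # 7.5x, UNO Q relative to UNO R3
```

The 2 GB / 2 KB quotient confirms that the abstract's 1,048,576× claim refers to raw capacity growth; the bandwidth figures, taken at face value, imply a 7.5× peak-bandwidth advantage.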

References

[1] H. K. Kondaveeti, N. K. Kumaravelu, S. D. Vanambathina, et al., “A systematic literature review on prototyping with Arduino,” Computer Science Review, vol. 40, p. 100364, 2021.

[2] N. K. Prabowo, et al., “The implementation of Arduino microcontroller boards in science: A bibliometric analysis from 2008 to 2022,” Journal of Engineering Education Transformations, vol. 37, no. 2, pp. 106–123, 2023.

[3] M. Zhu, B. Song, and S. Wang, “Edge AI: On-device machine learning for mobile and IoT devices,” Journal of Systems Architecture, vol. 115, p. 101964, 2021.

[4] C. R. Banbury, et al., “MLPerf Tiny benchmark,” in Proc. 4th MLSys Conf., 2021.

[5] M. Abadi, et al., “TensorFlow: Large-scale machine learning on heterogeneous systems,” in Proc. 12th USENIX Symp. Operating Systems Design and Implementation, 2016.

[6] P. Premalatha and S. Singh, “Design and development of automatic seed sowing machine,” Asian Journal of Electrical Sciences, vol. 8, suppl. 1, pp. 51–54, 2019, doi: 10.51983/ajes-2019.8.S1.2307.

[7] Qualcomm Technologies Inc., “Qualcomm to acquire Arduino: Accelerating developers’ access to its leading-edge computing and AI,” Press Release, Oct. 2025.

[8] Y. C. Lin, et al., “Qualcomm Dragonwing platform specifications and technical documentation,” Qualcomm Dragonwing Developer Resources, 2024.

[9] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An extremely efficient convolutional neural network for mobile devices,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2018, pp. 6848–6856.

[10] A. G. Howard, et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” arXiv preprint arXiv:1704.04861, 2017.

[11] M. Tan and Q. V. Le, “EfficientNet: Rethinking model scaling for convolutional neural networks,” in Proc. Int. Conf. Machine Learning, 2019.

[12] Puthillath, A. Jose, A. S. Kumar, M. S. Shibin, and S. Johnson, “Recreation of conventional Tonga,” Asian Journal of Electrical Sciences, vol. 10, no. 2, pp. 1–5, 2021, doi: 10.51983/ajes-2021.10.2.2948.

[13] L. Stäcker, J. Fei, P. Heidenreich, et al., “Deployment of deep neural networks for object detection on edge devices with runtime optimization,” in Proc. IEEE/CVF Int. Conf. Computer Vision Workshops, 2021.

[14] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time object detection,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 779–788.

[15] OpenCV Team, “OpenCV: Open-source computer vision library,” Computer Vision and Machine Learning Software Library, 2024.

[16] Y. L. Boureau, et al., “Learning convolutional neural networks for graphs,” in Advances in Neural Information Processing Systems, 2014.

[17] B. Zhou, et al., “Learning deep features for discriminative localization,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2016, pp. 2921–2929.

[18] O. Horyachyy, “Comparison of wireless technologies used in a smart home,” M.S. thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2017.

[19] Saha, K. Kumar, R. Jesmin, Satya, and V. Gupta, “Intelligent greenhouse monitoring system (IGMS) integrated with GSM technology,” Asian Journal of Electrical Sciences, vol. 8, no. 1, pp. 40–43, 2019, doi: 10.51983/ajes-2019.8.1.2334.

[20] M. C. du Plessis, et al., “Bringing computer vision to the edge: An overview of real-time image analytics with SAS Event Stream Processing,” in Proc. SAS Global Forum, 2020.

[21] Analog Devices, “MAX78000 ultra-low power convolutional neural network accelerator,” Datasheet and Application Notes, 2023.

[22] S. Bhattacharya, “Monitoring and removal of fake product review using machine learning,” Research Square, vol. 7, no. 12, p. 7, 2023, doi: 10.21203/RS.3.RS-2818111/V1.

[23] L. N. Smith and N. Topin, “Super-convergence: Very fast training of neural networks using large learning rates,” in Proc. Int. Conf. Artificial Intelligence and Statistics, 2019.

[24] M. Tan, Q. V. Le, and E. D. Cubuk, “EfficientDet: Scalable and efficient object detection,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, 2020, pp. 10781–10790.

[25] F. N. Iandola, et al., “SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size,” arXiv preprint arXiv:1602.07360, 2016.

[26] H. Li, A. Kadav, I. Durdanovic, et al., “Pruning filters for efficient ConvNets,” in Proc. Int. Conf. Learning Representations, 2017.

[27] S. Bhattacharya, A. K. Nayek, A. Biswas, M. Sen, and A. Halder, “An advanced Internet of Things-based water quality monitoring architecture for sustainable aquaculture leveraging long range wide area network communication protocol,” ES General, Oct. 9, 2025, doi: 10.30919/esg1780.

[28] Edge Impulse Team, “Edge Impulse documentation: TensorFlow Lite integration and model optimization,” Platform Documentation, 2024.

[29] Arduino, “Arduino Uno user manual,” [Online]. Available: https://docs.arduino.cc/tutorials/uno-q/user-manual/

[30] S. Bharathidasan, Y. Farisha, R. Harisoothanakumar, U. Rajeshwari, and J. Suryaprabha, “Automatic corporation water supply control using Arduino,” Asian Journal of Electrical Sciences, vol. 8, no. 1, pp. 36–39, 2019, doi: 10.51983/ajes-2019.8.1.2335.

Published

12-10-2025

How to Cite

Hsiao, Y.-S., Bhattacharya, S., & Halder, A. (2025). Arduino UNO Q with Qualcomm AI Chip: Enabling Next-Generation Edge Intelligence for Embedded AI Prototyping. Asian Journal of Electrical Sciences, 14(2), 6–20. https://doi.org/10.70112/ajes-2025.14.2.4279
