(comments)

Original link: https://news.ycombinator.com/item?id=40783598

Edge Impulse is a platform that optimizes deep learning, computer vision, and digital signal processing workloads for a wide range of microcontrollers, CPUs, exotic accelerators (including neuromorphic compute), and edge GPUs. Their goal is to make machine learning models deployable on resource-constrained devices. Users can upload TensorFlow, PyTorch, or JAX models and receive back an optimized C++ library, ready for integration. Their Studio also provides tools for training models tailored to different embedded hardware, with hardware-aware hyperparameter tuning. One user implemented a SIMD-optimized version of the FAST feature detector for the ESP32-S3 chip, significantly improving its throughput. This enhancement lets the ESP32-S3 efficiently handle realtime computer vision tasks such as a 30fps VGA stream. FAST (features from accelerated segment test) is a feature-detection algorithm with applications in areas such as robotics and autonomous vehicles. Among affordable chips, Neon is an optional SIMD instruction set extension present on the Pi Zero and larger systems. The Orin Nano offers a respectable 40 TOPS of compute, reportedly sufficient for Copilot+. Recently, progress has also been made on coherently converting infrared signals to visible light, opening new possibilities for sensing and imaging. The PicoVGA library enables VGA display output on the Raspberry Pi Pico, while work continues on designing efficient SIMD algorithms from scratch.

Related Articles

Original Article


>For silicon that's cheaper than the average coffee, that's pretty cool.

Maybe it's not the chip that's too cheap. Maybe it's the coffee that's too expensive.



OTOH, I've been waiting for disposable coffee cups with OLED-based video ads ever since Minority Report. But tech progress is just too damn slow :P



I wish, but tbh coffee is probably artificially cheap: larger corporations exploit local farms and effectively maintain local monopolies, forcing farms to sell to them for a fraction of what the crop is actually worth.



> Maybe it's the coffee that's too expensive.

Ha, well, there is a disturbing reason why computer vision with ultra-cheap hardware is possible: countries all over the world are buying these by the billions in order to keep an eye on their citizens :-(

Big brother is enabling incredible economies of scale....



If you're interested in this stuff and wanna try it yourself, check out our product, Edge Impulse:

https://edgeimpulse.com/ai-practitioners

We work directly with vendors to perform low level optimization of deep learning, computer vision, and DSP workloads for dozens of architectures of microcontrollers and CPUs, plus exotic accelerators (neuromorphic compute!) and edge GPUs. This includes ESP32:

https://docs.edgeimpulse.com/docs/edge-ai-hardware/mcu/espre...

You can upload a TensorFlow, PyTorch, or JAX model and receive an optimized C++ library direct from your notebook in a couple lines of Python. It's honestly pretty amazing.

And we also have a full Studio for training models, including architectures we've designed specifically to run well on various embedded hardware, plus hardware-aware hyperparameter optimization that will find the best model to fit your target device (in terms of latency and memory use).
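A core trick behind fitting models on MCUs like those mentioned above is post-training int8 quantization, which shrinks weights roughly 4x before they are compiled into a C++ library. A minimal NumPy sketch of the idea (hypothetical helper functions, not Edge Impulse's actual API):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine-quantize float weights to int8 plus (scale, zero_point).

    This mirrors the post-training quantization step that shrinks a
    model ~4x before deployment to a microcontroller.
    """
    lo, hi = float(weights.min()), float(weights.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate float weights."""
    return (q.astype(np.float32) - zero_point) * scale

w = np.linspace(-1.0, 1.0, 9, dtype=np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# Round-trip error is bounded by one quantization step.
print(np.max(np.abs(w - w_hat)) < s)
```

The accuracy/latency trade-off of choices like this is exactly what hardware-aware hyperparameter search explores automatically.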



Thank you! We're trying to bring embedded ML in reach of all engineering teams and domain experts.

Previously you needed a crazy mixture of ML knowledge and low-level embedded engineering skills even to get started, which is not a common combination!



tinyml fascinates me because its principles can be directly applied to web-based applications imho.

micropython seems pretty accessible from first glance. would it be easy to create a webassembly port of its code?



> As I've been really interested in computer vision lately, I decided on writing a SIMD-accelerated implementation of the FAST feature detector for the ESP32-S3 [...]

> In the end, I was able to improve the throughput of the FAST feature detector by about 220%, from 5.1MP/s to 11.2MP/s in my testing. This is well within the acceptable range of performance for realtime computer vision tasks, enabling the ESP32-S3 to easily process a 30fps VGA stream.
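The VGA claim checks out arithmetically: a 640×480 stream at 30 fps needs about 9.2 MP/s, comfortably under the reported 11.2 MP/s (a ~2.2× speedup over the 5.1 MP/s baseline):

```python
# VGA throughput requirement vs. the reported FAST detector speed.
width, height, fps = 640, 480, 30
required_mps = width * height * fps / 1e6  # megapixels per second
reported_mps = 11.2                        # from the quoted benchmark
print(required_mps)                 # 9.216
print(reported_mps > required_mps)  # True: 30fps VGA fits with headroom
```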

What are some use cases for FAST?

Features from accelerated segment test: https://en.wikipedia.org/wiki/Features_from_accelerated_segm...

Is there TPU-like functionality in anything in this price range of chips yet?

Neon is an optional SIMD instruction set extension for ARMv7 and ARMv8; the Pi Zero 2 and larger have it (the original ARMv6 Pi Zero does not).

Orin Nano has 40 TOPS, which is sufficient for Copilot+ AFAIU. "A PCIe Coral TPU Finally Works on Raspberry Pi 5" https://news.ycombinator.com/item?id=38310063

From https://phys.org/news/2024-06-infrared-visible-device-2d-mat... :

> Using this method, they were able to up-convert infrared light of wavelength around 1550 nm to 622 nm visible light. The output light wave can be detected using traditional silicon-based cameras.

> "This process is coherent—the properties of the input beam are preserved at the output. This means that if one imprints a particular pattern in the input infrared frequency, it automatically gets transferred to the new output frequency," explains Varun Raghunathan, Associate Professor in the Department of Electrical Communication Engineering (ECE) and corresponding author of the study published in Laser & Photonics Reviews.

"Show HN: PicoVGA Library – VGA/TV Display on Raspberry Pi Pico" https://news.ycombinator.com/item?id=35117847#35120403 https://news.ycombinator.com/item?id=40275530

"Designing a SIMD Algorithm from Scratch" https://news.ycombinator.com/item?id=38450374



Thanks for reading!

> What are some use cases for FAST?

The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, which can be used as a first step in motion tracking and SLAM (simultaneous localization and mapping) algorithms typically seen in XR, robotics, etc.
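The segment test at the heart of FAST can be sketched in a few lines: a pixel is a corner if, on the 16-pixel Bresenham circle of radius 3 around it, some arc of N contiguous pixels is all brighter (or all darker) than the center by a threshold. A simplified sketch, without the high-speed pretest or non-maximum suppression of the full algorithm, and nothing like the SIMD version discussed above:

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle (radius 3) used by FAST.
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """Segment test: n contiguous circle pixels all brighter or all
    darker than the center pixel by more than threshold t."""
    c = int(img[y, x])
    ring = np.array([int(img[y + dy, x + dx]) for dx, dy in CIRCLE])
    for sign in (1, -1):  # check "brighter" arcs, then "darker" arcs
        mask = sign * (ring - c) > t
        # Duplicate the ring so contiguous runs across the seam count.
        run, best = 0, 0
        for m in np.concatenate([mask, mask]):
            run = run + 1 if m else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# A bright 4x4 square on a dark background: its corner should fire.
img = np.zeros((16, 16), dtype=np.uint8)
img[6:10, 6:10] = 255
print(is_fast_corner(img, 6, 6))  # True: corner of the square
print(is_fast_corner(img, 3, 3))  # False: flat background
```

The SIMD win comes from evaluating the brighter/darker comparisons for many pixels per instruction instead of one at a time as here.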

> Is there TPU-like functionality in anything in this price range of chips yet?

I think that in the case of the ESP32-S3, its SIMD instructions are designed to accelerate inference of quantized AI models (see: https://github.com/espressif/esp-dl), and also some signal processing like FFTs. I guess you could call the SIMD instructions TPU-like, in the sense that the chip has specific instructions that facilitate ML inference (EE.VRELU.Sx performs the ReLU operation). Using these instructions will still take away CPU time, whereas TPUs are typically their own processing core, operating asynchronously. I'd say this is closer to ARM NEON.
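What a vector instruction like EE.VRELU.Sx buys can be mimicked element-wise in NumPy: the point of SIMD is that the lane-wise max over a whole register happens in one operation rather than a scalar loop (illustrative only; the real instruction semantics are in Espressif's ISA reference):

```python
import numpy as np

# Scalar ReLU: one comparison and one loop iteration per element.
def relu_scalar(xs):
    return [int(x) if x > 0 else 0 for x in xs]

# "SIMD-style" ReLU: one vectorized op over all int8 lanes at once,
# roughly what a vector instruction does per register per cycle.
def relu_vector(xs):
    return np.maximum(xs, 0)

x = np.array([-3, -1, 0, 2, 5], dtype=np.int8)
print(relu_scalar(list(x)))  # [0, 0, 0, 2, 5]
print(relu_vector(x))
```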



> The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, …

Is that related to ‘Energy Function’ in any way?

(I ask because a long time ago I was involved in an Automated Numberplate Reading startup that was using an FPGA to quickly find the vehicle numberplate in an image)



What you are thinking of operates at a different level of abstraction. Energy functions are a general way of structuring a problem, used (sometimes abused) to apply an optimization algorithm to find a reasonable solution for it.

FAST is an algorithm for efficiently looking for "interesting" parts (basically, corners) of an image, so you can safely (in theory) ignore the rest of it. The output from a feature detector may end up contributing to an energy function later, directly or indirectly.
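To make the distinction concrete: an energy function scores candidate solutions so an optimizer can search for the best one. A minimal example (illustrative, not from the ANPR system mentioned above) is fitting a line by minimizing a sum-of-squared-residuals energy, where detected features such as FAST corners could supply the points:

```python
import numpy as np

def energy(params, pts):
    """Sum of squared residuals: lower energy = better line fit."""
    m, b = params
    x, y = pts[:, 0], pts[:, 1]
    return float(np.sum((y - (m * x + b)) ** 2))

# Points near y = 2x + 1; a feature detector could have produced these.
pts = np.array([[0.0, 1.1], [1.0, 2.9], [2.0, 5.2], [3.0, 6.8]])

# Closed-form least-squares minimizer of this particular energy.
A = np.column_stack([pts[:, 0], np.ones(len(pts))])
m, b = np.linalg.lstsq(A, pts[:, 1], rcond=None)[0]

# The minimizer beats an arbitrary candidate such as (0, 0).
print(energy((m, b), pts) <= energy((0.0, 0.0), pts))  # True
print(round(m, 1), round(b, 1))
```

The feature detector operates below this level: it just decides which pixels are worth turning into the points the energy is evaluated on.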



> Is there TPU-like functionality in anything in this price range of chips yet?

Kendryte K210 supports 1x1 and 3x3 convolutions on the "TPU". It was pretty well supported in terms of software & documentation but sadly it hasn't become popular.

These days, you can easily find cheap RV1103 ("LuckFox"), BL808 ("Ox64/Pine64") and CV1800B/SG2002 ("MilkV") based dev boards, all of which have some sort of basic TPU. Unfortunately, they are designed to be Linux boards, meaning that all TPU-related stuff is extremely abstracted with zero under-the-hood documentation. So it's absolutely unclear whether their TPUs are real or faked with clever code optimizations.



> These days, you can easily find cheap RV1103 ("LuckFox"), BL808 ("Ox64/Pine64") and CV1800B/SG2002 ("MilkV") based dev boards, all of which have some sort of basic TPU. Unfortunately, they are designed to be Linux boards, meaning that all TPU-related stuff is extremely abstracted with zero under-the-hood documentation. So it's absolutely unclear whether their TPUs are real or faked with clever code optimizations.

They all have TPU in hardware, my team has been verifying and benchmarking them. Documentation is only available for the high-level C APIs to the libraries that a programmer is expected to use, and even that tends to be extremely lacking.



I wonder how hard it would be, presumably with some trade-off with detection windows, to use a few of these in parallel and process higher resolutions and frame rates?



Compared to ESP8266, there's generally pretty good ESP32 support for Rust, but you'll likely need to mix in the C/C++ toolchain if you want to use the standard library. no-std Rust on ESP32 isn't terrible in my experience, though, just not as fleshed out - particularly for hooking into components like wifi/networking and probably a camera as well.

Like the other commenter said, there's plenty of support for SIMD and asm in Rust.

You might ask around on a Rust embedded or Rust ESP32 chatroom before making the dive.



You can actually use the IDF system in Rust to use the std lib, at least on ESP32-C3. Probably others too.

If you are on Windows, you will need to place the project folder at the top level drive directory, and there are other quirks as well, but it works.



I don't think that applies to the ESP32 family of devices. I've never heard of DSP hardware onboard them.

I think the comment you're referring to is talking about the architecture in general, but not the silicon we're discussing here.



ESP32 ee.* operations in assembly look pretty much like aliases for a VLIW bundles, on the same cycle issuing loads used in the next op while also doing multiplication on other operands. This is not a minimal Xtensa. They might not have the Tensilica toolchain for redistribution to use these features freely but apparently they exposed these extensions in their assembler in some form.



Generally speaking, this is not correct. Base Xtensa is not VLIW, but Xtensa's various vector extensions do allow VLIW instructions, collectively called "FLIX."

It is doubtful that ESP32's Xtensa is VLIW-capable, though. Presumably their compiler would emit FLIX instructions if it were.



More expensive, sure. But "better" is pretty rich considering it is Intel. My money is on this platform just evaporating in the next 5 years. The ESP32 has proven you can rely on its supply and longevity.



Arguably the UP^2 is another class of device. Up to 8 GB of RAM and up to 128 GB of storage + a whole x86 CPU with dual gigabit LAN.

And the price, size and power consumption are also quite a bit higher but it will certainly grant a better general compute environment, if you want to run Linux or smth.



It's very easy to use any pytorch or tensorflow packages, or open3d, pcl, librealsense or similar vision packages. Powerful enough to do realtime vision tasks, which you certainly cannot do with 2€ boards.
