Turning a MacBook into a touchscreen with $1 of hardware (2018)

Original link: https://anishathalye.com/macbook-touchscreen/

## Project Sistine: a $1 Touchscreen for the MacBook

"Project Sistine" turns a MacBook into a touchscreen using $1 of hardware — a small mirror — plus computer vision. The team (Kevin, Guillermo, and Logan), inspired by a middle-school project's observation that a finger's reflection indicates touch, built a prototype in about 16 hours.

The system positions a mirror so that the MacBook's webcam views the screen at an angle, letting it detect the reflection produced when a finger touches the screen. Software then processes the webcam feed, identifying fingers via skin-color filtering and contour detection, and distinguishing hovering from actual touches.

A calibration step, in which the user touches specific points, establishes a "homography" — a mapping between webcam coordinates and on-screen positions. This lets the system translate detected touches into mouse events, instantly touch-enabling existing applications.

For now, this is a proof of concept. Improvements such as a higher-resolution webcam and a curved mirror could make Sistine a practical, low-cost touchscreen solution. The project is open source under the MIT License and demonstrates the potential of simple hardware combined with clever software.

## MacBook Touchscreen Hack and Discussion (Hacker News Summary)

A recent Hacker News discussion centered on this 2018 project, which demonstrated a touchscreen interface for the MacBook using only the built-in webcam and $1 of hardware. The project uses computer vision to detect touch points, and it sparked debate about the practicality and necessity of touchscreens on laptops.

Many commenters — including one citing a 2010 Steve Jobs quote — were skeptical of the ergonomics and usefulness of laptop touchscreens, raising the "gorilla arm" fatigue problem. Others shared their experiences with touchscreen laptops, noting that despite potential benefits for specific tasks such as drawing or casual browsing, their actual use was limited.

The conversation also touched on the possibility of Apple releasing a touchscreen MacBook, with opinions divided: some worried about compromised screen quality and fingerprint smudges, while others saw potential benefits for users accustomed to touch interfaces on the iPad. A recurring theme was the trade-off between precise input (mouse/trackpad) and the intuitiveness of touch. Ultimately, the discussion highlighted the long-running debate over the ideal laptop interface, and whether Apple's rumored move signals a shift in user expectations.

## Original Article

We turned a MacBook into a touchscreen using only $1 of hardware and a little bit of computer vision. The proof-of-concept, dubbed “Project Sistine” after our recreation of the famous painting in the Sistine Chapel, was prototyped by me, Kevin, Guillermo, and Logan in about 16 hours.

The basic principle behind Sistine is simple. Surfaces viewed from an angle tend to look shiny, and you can tell if a finger is touching the surface by checking if it’s touching its own reflection.

Kevin, back in middle school, noticed this phenomenon and built ShinyTouch, utilizing an external webcam to build a touch input system requiring virtually no setup. We wanted to see if we could miniaturize the idea and make it work without an external webcam. Our idea was to retrofit a small mirror in front of a MacBook’s built-in webcam, so that the webcam would be looking down at the computer screen at a sharp angle. The camera would be able to see fingers hovering over or touching the screen, and we’d be able to translate the video feed into touch events using computer vision.

Our hardware setup was simple. All we needed was to position a mirror at the appropriate angle in front of the webcam. Here is our bill of materials:

  • Small mirror
  • Rigid paper plate
  • Door hinge
  • Hot glue

After some iteration, we settled on a design that could be assembled in minutes using a knife and a hot glue gun.

Here’s the finished product:

The first step in processing video frames is detecting the finger. Here’s a typical example of what the webcam sees:

The finger detection algorithm needs to find the touch/hover point for further processing. Our current approach uses classical computer vision techniques. The processing pipeline consists of the following steps:

  1. Filter for skin colors and binary threshold
  2. Find contours
  3. Find the two largest contours and ensure that the contours overlap in the horizontal direction and the smaller one is above the larger one
  4. Identify the touch/hover point as the midpoint of the line connecting the top of the bottom contour and the bottom of the top contour
  5. Distinguish between touch and hover based on the vertical distance between the two contours
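Steps 2–5 above can be sketched in plain Python/NumPy. This is an illustrative toy, not Sistine's actual code: step 1 (the skin-color threshold) is assumed to have already produced a binary mask, the `touch_threshold` pixel gap is a made-up value, and a real implementation would use OpenCV's `cv2.findContours` rather than this simple BFS blob labeling:

```python
import numpy as np
from collections import deque

def find_blobs(mask):
    """Label connected components in a binary mask via BFS.
    Returns a list of (row, col) pixel arrays, one per blob."""
    h, w = mask.shape
    visited = np.zeros((h, w), dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                queue = deque([(y, x)])
                visited[y, x] = True
                pixels = []
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                blobs.append(np.array(pixels))
    return blobs

def touch_point(mask, touch_threshold=3):
    """Steps 2-5: take the two largest blobs (finger + reflection), check
    that they overlap horizontally with the smaller one above the larger
    one, and return (x, y, is_touch) for the midpoint between them."""
    blobs = sorted(find_blobs(mask), key=len, reverse=True)[:2]
    if len(blobs) < 2:
        return None
    larger, smaller = blobs
    # Require horizontal overlap (columns are the second coordinate).
    if (smaller[:, 1].max() < larger[:, 1].min()
            or larger[:, 1].max() < smaller[:, 1].min()):
        return None
    # The smaller blob must sit above the larger one (smaller row index).
    if smaller[:, 0].mean() > larger[:, 0].mean():
        return None
    top, bottom = smaller, larger
    gap_top = top[:, 0].max()        # bottom edge of the top contour
    gap_bottom = bottom[:, 0].min()  # top edge of the bottom contour
    y = (gap_top + gap_bottom) / 2
    x = (top[:, 1].mean() + bottom[:, 1].mean()) / 2
    is_touch = (gap_bottom - gap_top) <= touch_threshold
    return (float(x), float(y), bool(is_touch))
```

The key geometric fact is in step 4: because the reflection mirrors the finger across the screen surface, the true contact point lies halfway between the fingertip and its reflection.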

Shown above is the result of applying this process to a frame from the webcam. The finger and reflection (contours) are outlined in green, the bounding box is shown in red, and the touch point is shown in magenta.

The final step in processing the input is mapping the touch/hover point from webcam coordinates to on-screen coordinates. The two are related by a homography. We compute the homography matrix through a calibration process where the user is prompted to touch specific points on the screen. After we collect data matching webcam coordinates with on-screen coordinates, we can estimate the homography robustly using RANSAC. This gives us a projection matrix that maps webcam coordinates to on-screen coordinates.
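For illustration, here is a minimal NumPy sketch of the homography estimation — the plain Direct Linear Transform solve, without the RANSAC outlier rejection the prototype uses (in practice, OpenCV's `cv2.findHomography(src, dst, cv2.RANSAC)` does both at once):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 matrix H with
    dst ~ H @ src (homogeneous coordinates) from >= 4 point pairs.
    A robust version would wrap this in RANSAC to reject bad
    calibration touches."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (up to scale) is the singular vector of A with the smallest
    # singular value, i.e. the last row of V^T.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Map a webcam coordinate to an on-screen coordinate."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Four non-degenerate calibration touches suffice in principle; collecting more points (and running RANSAC over them) makes the fit robust to the occasional mis-detected touch.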

The video above demonstrates the calibration process, where the user has to follow a green dot around the screen. The video includes some debug information, overlaid on live video from the webcam. The touch point in webcam coordinates is shown in magenta. After the calibration process is complete, the projection matrix is visualized with red lines, and the software switches to a mode where the estimated touch point is shown as a blue dot.

In the current prototype, we translate hover and touch into mouse events, making existing applications instantly touch-enabled.
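The hover/touch-to-mouse translation can be sketched as a tiny state machine (the class and callback names here are ours, not Sistine's): every frame moves the cursor, and transitions into or out of the touch state press or release the button. On macOS, the callback could post real events via Quartz's `CGEventCreateMouseEvent`/`CGEventPost` — an assumption about one way to do it, not necessarily how the prototype does:

```python
class TouchToMouse:
    """Turn a stream of (x, y, is_touch) observations into mouse events.
    post_event is a callback taking (kind, x, y), where kind is one of
    "move", "down", or "up"."""

    def __init__(self, post_event):
        self.post = post_event
        self.touching = False

    def update(self, x, y, is_touch):
        # Hovering or touching, the cursor tracks the finger.
        self.post("move", x, y)
        # Press on the hover->touch transition, release on touch->hover.
        if is_touch and not self.touching:
            self.post("down", x, y)
        elif not is_touch and self.touching:
            self.post("up", x, y)
        self.touching = is_touch
```

Keeping the event backend behind a callback is what makes the same detection pipeline drive either a real OS cursor or a test harness.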

If we were writing our own touch-enabled apps, we could directly make use of touch data, including information such as hover height.

Project Sistine is a proof-of-concept that turns a laptop into a touchscreen using only $1 of hardware, and for a prototype, it works pretty well! With some simple modifications such as a higher resolution webcam (ours was 480p) and a curved mirror that allows the webcam to capture the entire screen, Sistine could become a practical low-cost touchscreen system.

Our Sistine prototype is open source, released under the MIT License.
