(Comments)

Original link: https://news.ycombinator.com/item?id=44049926

A Hacker News thread discusses an article about estimating depth from satellite imagery and its potential applications. Commenters propose uses beyond simple shadow calculations, including estimating the oil level in storage tanks, augmenting OpenStreetMap data with building heights, and assessing buildings' vulnerability to disasters. Other ideas include measuring canopy height for forest monitoring, flood-depth analysis for insurance, and identifying potential emergency landing spots for aircraft. The comments also mention using synthetic aperture radar (SAR) for these purposes. Some believe the technique could serve urban heat island analysis, monitoring structural changes in infrastructure such as dams, and even warfare, particularly strike-zone modelling. One commenter notes that ML-based depth estimation has been largely superseded by other methods, while others point out it is being used to convert 2D images into stereoscopic VR content, especially pornography. Finally, a commenter asks whether the technology could improve the 3D modelling in Google Earth and Microsoft Flight Simulator, particularly for terrain and cities in poorer countries.


  • Original
    Satellites Spotting Depth (marksblogg.com)
    99 points by marklit 2 days ago | 29 comments










    This blog is great! I've followed him for a long time and I always find really interesting content. Kudos to the author.


    There must be some cool application for this but I can't think of what. I guess computing shadows and things like that but we often already have 3d buildings (though maybe not for rural areas like this does).


    An interesting application of shadow/depth detection is estimating the level of oil in those giant circular storage tanks..!

    https://medium.com/planet-stories/a-beginners-guide-to-calcu...
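
    As a rough illustration of the shadow geometry the linked Planet post describes, here is a minimal Python sketch; the shadow lengths and sun elevation below are hypothetical measurements, not values from the article:

      import math

      def tank_fill_fraction(ext_shadow_m, int_shadow_m, sun_elevation_deg):
          """Estimate the fill level of a floating-roof tank from its shadows.

          ext_shadow_m: length of the shadow the tank casts on the ground
          int_shadow_m: length of the shadow the rim casts onto the floating roof
          sun_elevation_deg: sun elevation angle at image capture time
          """
          tan_elev = math.tan(math.radians(sun_elevation_deg))
          tank_height_m = ext_shadow_m * tan_elev   # rim height above ground
          roof_depth_m = int_shadow_m * tan_elev    # how far the floating roof has sunk
          return max(0.0, min(1.0, 1.0 - roof_depth_m / tank_height_m))

      # Hypothetical single-image measurements:
      print(tank_fill_fraction(30.0, 6.0, 35.0))    # ~0.8, i.e. roughly 80% full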



    This type of tinkering with data and imagery is so satisfying. Wish I had more opportunities to chase stuff like this in my life!


    OpenStreetMap often has building outlines, but not building height. This would be a nice way to augment that data for visualisations (remember: OSM doesn't take auto-generated bot updates, so don't submit that to the primary source).
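
    If you had the model's estimated heights as a raster and OSM footprints as vectors, one way to join them (for your own visualisations, not for upload) would be a zonal median per footprint. A minimal sketch; the file names are hypothetical and both datasets are assumed to share a CRS:

      # pip install geopandas rasterstats
      import geopandas as gpd
      from rasterstats import zonal_stats

      buildings = gpd.read_file("osm_buildings.geojson")            # hypothetical OSM export
      stats = zonal_stats(buildings["geometry"],                    # one result per footprint
                          "estimated_heights.tif",                  # hypothetical model output
                          stats=["median"])

      buildings["est_height_m"] = [s["median"] for s in stats]
      buildings.to_file("buildings_with_heights.geojson", driver="GeoJSON")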


    It does have building height. That's why flightsim 2020 had those weird spikes all over the place, people putting "999" (or similar) as height on OSM.


    Similar to the flood analysis others have mentioned, this can be used to create databases of buildings with the number of stories for each, which is important for understanding how each building will respond to various catastrophes (earthquakes, strong winds, etc.) in addition to various non-catastrophe administrative tasks. The other post about finding the depth of oil in oil tanks is actually super interesting to me because the amount of oil in the tank is a huge determinant of how it will respond to seismic ground motions. I had no idea the top sinks with the oil level and am skeptical that it does on all of the tanks but it's cool nonetheless.
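
    A minimal sketch of the height-to-storeys step, assuming roughly 3 m per storey (a rule of thumb that would need calibration per building type and region before feeding any catastrophe model):

      def estimate_stories(height_m, storey_height_m=3.0):
          """Rough storey count from an estimated building height."""
          return max(1, round(height_m / storey_height_m))

      print(estimate_stories(17.5))  # -> 6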


    They pretty much all do by design, it prevents vapours from building up at the top of the tank which is a fire/explosion hazard.

    It works even better with high resolution synthetic aperture radar as you can measure the tank height displacement directly: https://www.iceye.com/blog/daily-analysis-and-forecast-of-gl...



    Measuring tree "depth" (ie canopy height) is a critical tool for conservation biology to monitor the world's forests. We already do this using remotely sensed data correlated against ground truth, which relies on specific optical reflectance characteristics associated with plant biology. But this technique is more general and works only on the spatial structure of the image itself, meaning this could potentially lead to more ubiquitous forest monitoring.


    Measuring the depth of floods. There’s a commercial product being sold to insurance companies doing this right now for quick and dirty impact assessments.


    Interesting, surprised they are using optical data for this instead of synthetic aperture radar. SAR (and in particular interferometric SAR, although that requires short repeat cycle) shines in this area, and a lot of the data is free.

    ESA provides worldwide 20m x 5m radar imagery from Sentinel-1 free online. Revisit in the mid-latitudes is generally a few times per week, with an exact repeat cycle every 12 days. Once Sentinel-1C is fully operational, it'll be half that.



    Trying to find emergency landing spots for planes from any position and speed? I'm not sure if planes' computers already (continuously) provide this to pilots: "here are the top 5 landing spots in this and that contingency"

    Might be good info to plan safer routes ahead of time too



    > I'm not sure if planes' computers already (continuously) provide this to pilots: "here are the top 5 landing spots in this and that contingency"

    No they don't. For airliners it doesn't really matter: the only place they can set down safely is an airport, and those are already listed in their systems and flight plan (alternates).

    For the smaller stuff it depends on the pilot; a common electronic flight system like the Garmin G1000 doesn't have the sensors to actually make that determination.



    What about freeways? Dry lake beds? The Hudson River?


    Yeah, but the determination of safety is pretty difficult to make, and it's extremely rare for such a landing to end safely. Take for example the Gimli Glider. That was an actual airport, though a defunct one; from a distance it looked fine, but in the end it turned out there was a race going on. It was only luck that people managed to get out of the way in time.

    Could an automated system make a better determination than a skilled pilot? And is the scenario frequent enough to warrant the big cost of cameras etc (keeping in mind they must be stabilized and with huge aperture to function at night). I doubt it.

    The "miracle on the Hudson" was not called a miracle for nothing. Usually it ends like a few months ago at Washington Reagan.

    And a freeway is never a safe place to land an airliner of course. The traffic makes it so. Even if there's very little, there's lampposts, barriers etc. If an airline pilot ever steers towards one they're really going for the least terrible option. Small planes fare better of course but again they won't have such tech for decades.



    This wouldn't detect overhead cables, which is the primary concern when using this to improve visual landing issues.


    Urban heat island analysis. The physical volume of buildings is an essential input parameter for calculating the estimated impact of the built environment and possible interventions (e.g. greening, reducing traffic) against local temperature rises. It is notoriously difficult to obtain that data at fine spatial resolution. This would be a game changer. True to a lesser degree for air pollution modelling as well, where building volume is a significant input for land use regression models.
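
    A minimal sketch of the volume calculation, reusing the hypothetical buildings_with_heights.geojson from the earlier OSM example and reprojecting to a metric CRS (a UTM zone picked arbitrarily here) so footprint areas come out in square metres:

      import geopandas as gpd

      buildings = gpd.read_file("buildings_with_heights.geojson").to_crs(epsg=32633)
      buildings["volume_m3"] = buildings.geometry.area * buildings["est_height_m"]

      # Total built volume for the area of interest; a real study would
      # aggregate this onto the grid used by the temperature/pollution model.
      print(buildings["volume_m3"].sum())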


    In a few recent bridge collapses and such, I've seen they've used past satellite data to show that there were signs months or years in advance.

    There was also some similar evidence regarding the Three Gorges Dam and how it's not doing so great, i.e. estimated height of the surrounding area over time indicating problematic movement, or something like that.



    Flood zone analysis.


    Warfare.


    This is probably the one that will pay the bills.

    If you can figure out fairly close-to-the-ground elevations, you can model a strike zone quite well.

    Good for special operations raids.

    But those folks might also have access to specialized NRO satellites, that can give you the data without the inference.



    Can you explain this a bit more? I don't know a lot about this use case but it sounds pretty interesting.


    I’m not sure what there is to explain.

    Seems pretty straightforward.



    The US has that, but a lot of other nations do not, and Ukraine's been buying up geospatial imagery all over just as fast as it can get it.


    I did a similar project as a toy many years ago: https://nbelakovski.github.io/topography_neural_net/

    In my case I just used it as a vehicle for learning about neural networks. I couldn't really think of a compelling use case. I wonder if the author of this article or the authors of the model have found one.



    Depth from ML was all the rage for a short bit, and I think most people filed it under "Things we can do with ML that we could already do better other ways". E.g., with a second image.

    Certainly it will find a niche use, but during that time the headlines in robotics papers were all about replacing traditional depth /range sensing with it, which doesn't seem plausible.
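
    For reference, the "second image" route the comment alludes to is classical stereo matching. A minimal OpenCV sketch, with hypothetical file names and calibration values:

      import cv2
      import numpy as np

      # Rectified grayscale stereo pair (hypothetical files).
      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      # Block matching; numDisparities must be a multiple of 16.
      stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
      disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

      # depth = f * B / disparity, with focal length f (pixels) and baseline B (metres).
      f_px, baseline_m = 1000.0, 0.2   # hypothetical calibration
      valid = disparity > 0
      depth_m = np.zeros_like(disparity)
      depth_m[valid] = f_px * baseline_m / disparity[valid]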



    It's being used a lot to turn regular videos/images into stereoscopic VR content. Mostly pornography.

    Nunif has tools to convert images/videos or even turn your desktop into a stereoscopic image and live-stream it to your VR headset over WiFi[1] and there's workflow nodes for ComfyUI[2].

    I tried the former and it reached conversion speeds of around 10 FPS for full HD content on consumer hardware, so definitely usable. Still, I don't really see the point outside adding a gimmick to vacation photos or pornography. Don't think anyone would want to convert and consume a non-VR Hollywood movie this way, but feel free to correct me on that.

    [1] https://github.com/nagadomi/nunif

    [2] https://github.com/kijai/ComfyUI-DepthAnythingV2 + https://github.com/MrSamSeen/ComfyUI_SSStereoscope
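
    Not nunif's actual pipeline, but the underlying idea (depth-image-based rendering) fits in a few lines of numpy: shift each pixel horizontally by a disparity proportional to its estimated depth, once per eye, then pack the result side by side. Real tools additionally inpaint the disocclusion holes this naive version leaves:

      import numpy as np

      def depth_to_sbs(image, depth, max_shift_px=12):
          """Naive depth-image-based rendering into a side-by-side stereo frame.

          image: HxWx3 uint8, depth: HxW float in [0, 1] with 1 = nearest.
          """
          h, w, _ = image.shape
          cols = np.arange(w)
          left = np.zeros_like(image)
          right = np.zeros_like(image)
          shift = (depth * max_shift_px).astype(int)
          for y in range(h):
              xl = np.clip(cols + shift[y], 0, w - 1)  # near pixels move right for the left eye
              xr = np.clip(cols - shift[y], 0, w - 1)  # and left for the right eye
              left[y, xl] = image[y, cols]
              right[y, xr] = image[y, cols]
          return np.concatenate([left, right], axis=1)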



    There's enough porn in real VR180 format though. There's whole platforms for it like sexlikereal. And studios specialising in it with famous performers.

    Even for that I don't know why you'd want it artificial. Porn is all about being as real as possible.



    Does this mean that Google Earth may get 3D models for more world cities? As of today they only have 3D for a limited number of cities, mostly from wealthy countries. Also, would Microsoft Flight Simulator also get more accurate 3D cities? And what about terrain? Google Earth uses mostly guesses to gauge the height of rural terrain like mountains.





