(comments)

Original link: https://news.ycombinator.com/item?id=43654912

A Hacker News thread discusses an article on gpuopen.com describing bilinear interpolation on a quadrilateral using barycentric coordinates. The method aims to improve texturing and deformation over triangles, which dominate today thanks to hardware acceleration and their planar nature. Some commenters debated the method's practicality: existing workflows, such as baking distortions into the texture map, already adequately solve the texturing problems of triangle meshes, and modern GPUs are optimized for triangle rasterization, so quads perform worse. Others saw potential, especially for low-poly art styles or cases where subdivision is undesirable. One commenter argued the key enabler is mesh shaders, which can pass quad definitions to the GPU for computation. Performance concerns were also raised, particularly for older GPUs that would fall back on geometry shaders, which are notoriously slow. Overall the discussion is nuanced, acknowledging both the method's potential benefits and the practical challenges of replacing triangles as the dominant primitive.
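The artifact the article targets can be sketched in a few lines of Python (illustrative only, not the article's implementation): interpolating a per-corner value bilinearly over a quad versus linearly over the quad's two triangles gives different results away from the edges, which is the diagonal seam discussed below.

```python
# Sketch (not the article's code): compare true bilinear interpolation
# over a quad with the piecewise-linear result after splitting the quad
# into two triangles along the c00-c11 diagonal. Corner values and
# function names here are illustrative.

def bilinear(u, v, c00, c10, c11, c01):
    """True bilinear interpolation of corner values at (u, v)."""
    return ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10
            + u * v * c11 + (1 - u) * v * c01)

def triangulated(u, v, c00, c10, c11, c01):
    """Linear interpolation after triangulating along the c00-c11 diagonal."""
    if u >= v:  # lower triangle: c00, c10, c11
        return c00 + u * (c10 - c00) + v * (c11 - c10)
    else:       # upper triangle: c00, c11, c01
        return c00 + v * (c01 - c00) + u * (c11 - c01)

# Corner values chosen so the two interpolants disagree inside the quad.
corners = (0.0, 1.0, 0.0, 1.0)  # c00, c10, c11, c01

center_smooth = bilinear(0.5, 0.5, *corners)      # 0.5
center_faceted = triangulated(0.5, 0.5, *corners)  # 0.0: the diagonal seam
```

The two agree at every corner and along the outer edges, but disagree in the interior; the mismatch is largest along the diagonal, which is exactly the shading/texturing seam the quad-interpolation approach removes.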


Original text
Bilinear interpolation on a quadrilateral using Barycentric coordinates (gpuopen.com)
101 points by mariuz 7 hours ago | 27 comments

This is one of those things that feels like a broken/half-assed/oversimplified implementation got completely proliferated into the world a long time ago and it took several years for the right person to do a full-depth mathematical analysis to reveal what we should've been doing all along. Similar to antialiasing and sharpening, texture filtering, color spaces and gamma correction, etc.

It reminded me of this article specifically: https://bgolus.medium.com/the-best-darn-grid-shader-yet-727f...



For someone who wrote textured triangles on a 386:

First rule of computer graphics: lie

Second rule of computer graphics: lie

Third rule of computer graphics: lie



The fact that triangles have proliferated is not due to half-assery. Hardware can rasterize them very quickly, and a triangle can have only one normal vector. Quads can be non-planar. It's true that quads are nice for humans and artists though!

As an aside, Catmull-Clark subdivision has been around since 1978, which, as a first step, breaks an arbitrary polyhedron into a mesh of quadrilaterals.



It's not so much that triangles are the primitive, as much as our logic for combining multiple triangles into a mesh and texturing, lighting, and deforming them in continuous ways clearly has some gaps. It's definitely not an easy problem and it's a fun exercise to see how various silicon innovations unlocked increasingly accurate solutions, and what corners needed to be cut to hit 30fps back in the day.


Yeah, I don't think triangles will go away anytime soon. And, sometimes they're even preferred in certain cases by artists (think creases on jeans).


It's quite astonishing how complicated it is to draw lines in 3D graphics. As a novice, I found it a little unbelievable that the primitives for drawing lines were effectively limited to a solid, one-pixel-wide screen-space line. Want to draw a 2-pixel-wide line? Do it yourself with triangles.
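The "do it yourself with triangles" step the commenter mentions can be sketched in Python (function name and conventions are mine, not from any graphics API): expand the segment into a screen-space quad of the desired width, then draw it as two triangles.

```python
import math

def thick_line_quad(p0, p1, width):
    """Expand segment p0 -> p1 into the four corners of a screen-space
    quad of the given pixel width (to be rendered as two triangles).
    Corners are returned in winding order around the quad."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.hypot(dx, dy)
    # Unit normal to the segment, scaled to half the desired width.
    nx = -dy / length * width / 2
    ny = dx / length * width / 2
    return [(p0[0] + nx, p0[1] + ny),
            (p0[0] - nx, p0[1] - ny),
            (p1[0] - nx, p1[1] - ny),
            (p1[0] + nx, p1[1] + ny)]
```

A horizontal segment from (0, 0) to (10, 0) with width 2 yields corners at y = +1 and y = -1; splitting the quad into triangles (0, 1, 2) and (0, 2, 3) gives the two triangles a renderer actually rasterizes.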


Ironically, back in the OpenGL 2.0 days, it was a lot easier to do things like this.


Well, technically the API is still available pretty much everywhere (be it directly or via a wrapper library) and most hardware still has support for drawing lines, so it is still easy to do things like this today :-P.

I'm using it all the time when I want to draw lines in 3D.

(though as far as lines and OpenGL are concerned, I remember reading ages ago that not even SGI's implementation had full support for everything OpenGL was supposedly able to do)



The API still exists, but in most implementations things like line styles and thickness are no longer supported.


It in no way replaces triangles, and very few will use it for good reason.

Why?

In many cases modern renderers use triangles only a few pixels in size; you won't see a C1 discontinuity at that size.

All the outer edges of a quad still have C1 discontinuities against neighboring quads; all it fixes is the internal diagonal.

It has performance & complexity overhead



For most workflows this is a non-issue. When texturing a triangle mesh, the distortions are baked into the texture map, so no seams are visible at the quad diagonals.


This seems to happen really often! I think I remember another one about color blending being done in the wrong gamma space on GPUs?


To answer some of the questions here: the reason this has not been used before is that this technique requires being able to access the quad definitions (i.e. which 4 vertices make up each quad) within the GPU.

Up until recently with Mesh Shaders, there's really just been no good way to send this data to the GPU and read back the barycentric coordinates you need in the fragment shader for each pixel.
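For a planar quad there is a classic, closely related computation (not necessarily the article's exact formulation, which works from per-triangle barycentrics): inverse bilinear interpolation, which recovers the bilinear (u, v) for a point from the quad's four vertices by solving a quadratic. A hedged CPU-side Python sketch of that idea:

```python
import math

def cross2(a, b):
    """2D scalar cross product."""
    return a[0] * b[1] - a[1] * b[0]

def inverse_bilinear(p, a, b, c, d):
    """Recover bilinear (u, v) for point p inside the planar quad a-b-c-d,
    where P(u, v) = a + u*e + v*f + u*v*g. Illustrative sketch only."""
    e = (b[0] - a[0], b[1] - a[1])
    f = (d[0] - a[0], d[1] - a[1])
    g = (a[0] - b[0] + c[0] - d[0], a[1] - b[1] + c[1] - d[1])
    h = (p[0] - a[0], p[1] - a[1])
    # Eliminating u from h = u*(e + v*g) + v*f gives a quadratic in v.
    A2 = cross2(f, g)
    A1 = cross2(f, e) - cross2(h, g)
    A0 = -cross2(h, e)
    if abs(A2) < 1e-12:
        v = -A0 / A1            # parallelogram: the equation is linear
    else:
        disc = math.sqrt(A1 * A1 - 4 * A2 * A0)
        v = (-A1 + disc) / (2 * A2)
        if not 0.0 <= v <= 1.0:  # pick the root inside the quad
            v = (-A1 - disc) / (2 * A2)
    # Back-substitute for u, using the larger denominator component.
    denom_x = e[0] + v * g[0]
    denom_y = e[1] + v * g[1]
    if abs(denom_x) > abs(denom_y):
        u = (h[0] - v * f[0]) / denom_x
    else:
        u = (h[1] - v * f[1]) / denom_y
    return u, v
```

On a GPU this per-pixel solve is exactly why the quad's four vertices must be visible to the fragment stage, which is the access problem mesh shaders finally make convenient.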

The article offers several options to support older GPUs, like Geometry Shaders and Tessellation Shaders. This is good, but these are really, at best, Terrible Hacks(tm). Proof of the ability to contort old extensions is not proof of reasonable performance!

Notably, geometry shaders are notorious for bad performance, so the fact that they list them as a viable strategy for older devices makes it pretty clear they aren't thinking much about performance, just possible compatibility.

Still, I think this is very cool, and now that GPUs are becoming much more of a generic computing device with the ability to execute arbitrary code on random buffers, I think we are nearly at the point of being able to break from the triangle and fix this! We hit this triangulation issue several times on the last project, and it's a real pain.



I am definitely not an expert in 3D graphics... but this looks like such an astonishingly simple and effective method, it makes me question why it wasn't thought of and picked up earlier.

I get that with fixed-pipeline GPUs you do what the hardware and driver make you do, but with the advent of programmable pipelines, you'd think improving stuff like this would be among the first things people do.

Anyway, gotta run and implement this in my toy Metal renderer.



You want triangles in general; they behave way better (think, for example, of computing intersections).

Also, debugging colors on a 3D surface is not an easy task (debugging in general in 3D is not easy). So if the rendering is nice and seems correct, you tend to think it is.

And if it wasn't, and you didn't encounter anything that bothers you, it doesn't matter that much; after all, what is important is the final rendering style, not that everything is perfectly accurate.



Because there is no reason to not use triangles.

Look at prideout's reply in the thread, the argument about having just one normal vector and the fact they can only describe one plane is huge. Unless you want more edge cases to deal with (hehe, pun intended), you're better off sticking to tris.



I know about the advantages and uniqueness properties of triangles. However, if the article is correct that artists prefer using quads when editing (I know absolutely nothing about 3D editing and didn't know that; I thought triangles were universal these days), then something is clearly missing from the pipeline if neatly mapping textures to quads, then converting to triangles, ends up messing up the interpolation.

Maybe we can continue to describe the geometry in triangles, but could use an additional "virtual fourth handle point" data (maybe expressed in barycentric coords of the other three, so everything is automatically planar) for stretching the interpolation in the expected way.

Anyway, I'm just getting started with Metal, and this provided a nice theme for experimentation.



Modern GPUs - and really GPUs for about 10 years - are stupidly fast when it comes to rasterizing triangles, so artists simply work with quads (and polygons of more than four vertices - when Blender added "n-gons" back in the day, many artists rejoiced) and just subdivide them if they want neater-looking interpolation before triangulating. But a lot of detail nowadays comes from geometry anyway.

This could have been nice ~15 years ago, when much lower poly counts and heavier reliance on textures for detail were more common. But at the same time, GPUs were also much slower, so this approach would have been slower too.

So in practice this is really a niche approach for when you want to have a large quad that isn't "perfectly" mapped to an underlying texture and you can't subdivide the quad. Nowadays such use cases are more likely to come from more procedural sides (or perhaps effects) of the engine than artists making assets for the game.

It might be useful for an artist wanting to use a modern engine running on a modern PC to make a low-poly retro-styled game though.



Interesting - low-poly (and 2D) are exactly where my interests are.

Btw. wouldn't it be possible in modern pipelines to remap or "disfigure" the texture when converting to triangles, so that it counters the bias accordingly? Ah, but that bakes in the shape of the quad, so it can't be modified runtime or it will get distorted again, right.



Because when working with a textured asset, these seam distortions simply don't occur. The inverse of the distortion is baked into the texture map of the asset. So the distortion between a triangle's world-space size vs. its texture-space size cancels out exactly, and everything looks correct.


Okay, so the same idea that I spitballed in a sibling thread:

> Btw. wouldn't it be possible in modern pipelines to remap or "disfigure" the texture when converting to triangles, so that it counters the bias accordingly? Ah, but that bakes in the shape of the quad, so it can't be modified runtime or it will get distorted again, right.

How does that work with animated meshes?



If you animate the mesh in a non-affine way, then seams are inevitable. The quad-rendering technique described in the article wouldn't save you from that either: seams would appear where two quads meet.


This actually seems quite easy to implement. Any thoughts on the performance hit a program takes when going this route instead of using one of the workarounds?


Very interesting! This reminds me of how stumped I was learning about UV unwrapping for texturing. Even simple models are difficult to unwrap into easily editable textures. "Why can't I just draw on the model?"

Blender has a few plugins these days that make it a lot easier --- one that impressed me was Mio3 UV: https://extensions.blender.org/add-ons/mio3-uv/



Is this really new? Will it become an option in Unity, Unreal and the like? The results seem convincing!


(deleted)


I think you can if you align the deflector with the tetryon field and feed all power from the warp core directly into the Heisenberg compensators.