Institutional Repository, Institute of Software, Chinese Academy of Sciences (ISCAS OpenIR)
Title:
基于GPU的真实感图形实时绘制技术研究 (Research on Real-Time Rendering Techniques for Realistic Graphics on GPU)
Author: 刘保权 (Liu Baoquan)
Defense Date: 2007-06-07
Degree Grantor: Institute of Software, Chinese Academy of Sciences
Place of Conferral: Institute of Software
Degree: PhD
Keywords: real-time rendering; graphics processing unit (GPU); reflection and refraction; caustics; nonlinear beam tracing
Alternative Title: Research on Real-Time Rendering Techniques on GPU
Abstract: Driven by the growth of the game and film entertainment industries, computer graphics is being applied ever more deeply and widely. The continual pursuit of higher rendering realism keeps raising scene complexity and detail, which in turn puts ever greater pressure on interactive rendering speed. In applications such as games, rendering speed matters to users even more than image quality. Yet rendering such highly complex scenes in real time, at cinematic levels of realism, remains an open problem and an active topic in graphics research. Despite years of effort, CPU-based rendering algorithms can produce highly realistic images but still fall short of real-time speed. Addressing this challenge, this thesis focuses on techniques for fast, real-time rendering of realistic images on the graphics processing unit (GPU). The main contributions and novelties are as follows:
1. A new GPU-based pipeline for real-time rendering of depth images. The method exploits the GPU's parallelism to accelerate depth-image rendering. A new forward 3D warping scheme, derived for the vertex shader, maps input pixels forward in parallel to raise rendering speed; the rasterization stage of the graphics hardware pipeline efficiently interpolates and reconstructs the image, yielding continuous, hole-free results; and per-pixel lighting in the pixel shader produces high-quality shading. Experiments show that the method renders at full-screen resolution at high speed while accurately preserving object silhouettes and correct occlusion relationships. A real-time walkthrough system built on this method renders scenes containing many objects represented as cylindrical depth images, with view-dependent dynamic LOD.
2. A more accurate GPU-based refraction rendering algorithm that directly renders multiple refractions and total internal reflection for complex objects represented as polygonal meshes, and can simulate transparent or translucent objects such as ice in real time. It requires no precomputation and can therefore render dynamically deforming objects interactively. By intersecting rays with the object in orthographic projection space, it traces the accurate paths of rays through the object (including both refraction and total-internal-reflection paths), producing highly realistic refraction and TIR effects.
3. A GPU-based real-time caustics rendering algorithm that directly renders the caustics cast by dynamic, deformable transparent objects under a light source. On the GPU it traces each photon's path through the transparent object, undergoing two refractions before landing on a background object, and, following energy conservation, accumulates photon energy on the receiver to compute the caustic radiance distribution. This yields highly realistic caustic images without any post-processing or inter-frame filtering. Since no precomputation is needed, both the transparent object and the opaque receiver may deform arbitrarily or move relative to each other, and the viewpoint and light source may move freely, without affecting rendering speed.
4. A GPU-based nonlinear beam tracing algorithm. Beam tracing has been used to render linear mirror reflections (perfectly planar mirrors) but has long been unable to handle nonlinear effects (such as curved-mirror reflection, refraction, caustics, and shadows). This thesis introduces nonlinear beam tracing to render these widespread nonlinear effects. Because modern graphics hardware is a pipeline architecture designed for linear vertex transformation and triangle rasterization, nonlinear beam tracing on the GPU remains difficult. Exploiting GPU programmability, we design and implement a nonlinear graphics pipeline that performs nonlinear beam tracing, allowing the rays within a beam to be neither parallel nor convergent to a single point. Because it relies on no spatial acceleration structure, the method supports high-quality real-time rendering of nonlinear global illumination effects in fully dynamic scenes.
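The refraction algorithm in contribution 2 rests on applying Snell's law at each interface the ray crosses, falling back to mirror reflection when total internal reflection occurs. As a minimal CPU-side illustration of that per-interface step (the thesis evaluates it per pixel in a shader; the function names and NumPy formulation here are assumptions for clarity, not the thesis's code):

```python
import numpy as np

def refract(incident, normal, eta):
    """Refract a unit direction `incident` at a surface with unit `normal`,
    for relative index of refraction `eta` (n_in / n_out).
    Returns the refracted direction, or None on total internal reflection."""
    cos_i = -np.dot(incident, normal)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # TIR: no transmitted ray exists at this angle
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

def reflect(incident, normal):
    """Mirror reflection about the surface normal (used when TIR occurs)."""
    return incident - 2.0 * np.dot(incident, normal) * normal
```

When `refract` returns `None`, the traced path continues inside the object with `reflect`, which is exactly the refraction-or-TIR branching the abstract describes.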
English Abstract: With the rapid advance of computer graphics in the game and movie industries, the demand for highly realistic images keeps growing, so scene geometry becomes ever more complex and detailed. This puts intense pressure on rendering speed. In fields such as games, rendering speed matters to players even more than rendering quality. At the same time, rendering complex scenes in real time with realistic quality remains a major challenge. Although conventional CPU-based rendering algorithms (such as ray tracing and photon mapping) can produce realistic images, they still cannot achieve real-time speed. Addressing this challenge, this thesis focuses on real-time rendering of realistic images on the GPU. The contributions and novelties of the thesis are mainly the following:
1. We present a new pipeline for rendering depth images entirely on the GPU. The implementation exploits the inherent parallelism of the GPU to speed up depth-image rendering. A novel forward 3D warping equation, evaluated in the vertex shader, delivers high rendering performance; hardware rasterization performs image resampling efficiently and produces hole-free results; and per-pixel lighting in the pixel shader yields high image quality. Rendering is fast at full-screen resolution, with correct self-occlusions and accurate silhouettes. We also implemented a real-time walkthrough system for objects based on cylindrical depth images, rendered with a view-dependent dynamic LOD representation at runtime.
2. A more accurate refraction rendering algorithm that runs entirely on the GPU. The algorithm directly renders complex objects represented by polygonal meshes without any precomputation, and allows the objects to be deformed dynamically through user interaction. It handles refraction and total internal reflection (TIR) through two or more interfaces: our method traces the accurate paths of both refracted and TIR rays through the transparent object, computing ray-surface intersections in orthographic projection space on the GPU, to produce realistic images with both refraction and TIR. We also simulate the translucency of ice-like objects in real time by applying a scattering and absorption model from volume rendering as rays pass through the object.
3. A new technique for interactive rendering of caustics, processed fully on the GPU. Without any precomputation, the algorithm directly renders refractive caustics cast by complex, deformable transparent objects onto an opaque receiver surface. We accurately trace photon paths, compute the energy carried by photons emitted from the light source, and distribute that energy onto the receiver surface according to a Gaussian basis function. As a result, photorealistic caustic images are rendered without any post-processing or temporal filtering over recent frames. We demonstrate that our method renders interactive caustics in real time under non-uniform deformation of both the refractive object and the receiver surface, while the light and camera may change position and direction interactively.
4. Beam tracing combines the flexibility of ray tracing with the speed of polygon rasterization. So far, however, beam tracing has handled only linear transformations, so it applies to linear effects such as planar mirror reflections but not to nonlinear effects such as curved-mirror reflection, refraction, caustics, and shadows. We introduce nonlinear beam tracing to render these nonlinear secondary-ray effects. Nonlinear beam tracing is highly challenging because commodity graphics hardware supports only linear vertex transformation and triangle rasterization. We overcome this difficulty by designing a nonlinear graphics pipeline and implementing it on top of a commodity GPU, allowing beams to be nonlinear: rays within the same beam need not be parallel or intersect at a single point. Using these nonlinear beams, real-time GPU applications can render secondary rays via polygon streaming, much as primary rays are rendered. A major strength of this methodology is that it naturally supports fully dynamic scenes without pre-storing a scene database. With our approach, nonlinear global illumination effects can be rendered in real time on a commodity GPU under a unified framework.
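The caustics technique in contribution 3 accumulates photon energy on the receiver with a Gaussian basis function under energy conservation. A minimal CPU-side sketch of that splatting step, assuming a 2D receiver lightmap (the resolution, kernel width, and function name are illustrative assumptions; the thesis performs this on the GPU):

```python
import numpy as np

def splat_caustics(hits, energies, res=128, sigma=1.5):
    """Accumulate photon energies on a `res` x `res` receiver lightmap.
    Each photon's energy is spread with a normalized Gaussian kernel centred
    on its hit point (`hits` given in [0, 1]^2 texture coordinates), which
    mirrors the smooth, energy-conserving distribution step in the abstract."""
    lightmap = np.zeros((res, res))
    radius = int(np.ceil(3 * sigma))            # 3-sigma kernel support
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    kernel /= kernel.sum()                      # normalize: total energy preserved
    for (u, v), e in zip(hits, energies):
        cx, cy = int(u * (res - 1)), int(v * (res - 1))
        x0, x1 = max(cx - radius, 0), min(cx + radius + 1, res)
        y0, y1 = max(cy - radius, 0), min(cy + radius + 1, res)
        lightmap[y0:y1, x0:x1] += e * kernel[y0 - cy + radius:y1 - cy + radius,
                                             x0 - cx + radius:x1 - cx + radius]
    return lightmap
```

Because the kernel is normalized before splatting, the lightmap's total equals the total photon energy (up to clipping at the borders), which is the energy-conservation property the abstract relies on.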
Language: Chinese
Content Type: Doctoral thesis
URI: http://ir.iscas.ac.cn/handle/311060/6610
Appears in Collections: Institute of Software, CAS

Files in This Item:
File Name / File Size / Content Type / Version / Access / License
10001_200318015003116刘保权_paper.pdf (2649 KB) -- Restricted access -- Contact to obtain full text

Recommended Citation:
刘保权 (Liu Baoquan). Research on Real-Time Rendering Techniques for Realistic Graphics on GPU [D]. Institute of Software, Chinese Academy of Sciences, 2007-06-07.

Items in IR are protected by copyright, with all rights reserved, unless otherwise indicated.

 

 

Copyright © 2007-2017 Institute of Software, Chinese Academy of Sciences