First disclosed this evening via teaser videos tied to a GDC presentation on Unity, today AMD is announcing two developer-oriented features: real-time ray tracing support for the company's ProRender rendering engine, and Radeon GPU Profiler 1.2.
Though Microsoft’s DirectX Raytracing (DXR) API and NVIDIA’s DXR backend “RTX Technology” were also announced today, the new ProRender functionality appears to be largely focused on game and graphics development, as opposed to an initiative aimed at real-time ray tracing in shipping games. Similarly, while Radeon GPU Profiler (RGP) has not received a major update since December 2017, as AMD’s low-level hardware-based debugging/tracing tool for Radeon GPUs it is likewise purely for developers.
In any case, for Radeon ProRender AMD is bringing support for mixing real-time ray tracing with traditional rasterization for greater computational speed. As with today's other real-time ray tracing announcements, AMD's focus is on capturing many of the photorealism benefits of ray tracing without the high computational costs. At a basic level this is achieved by limiting the use of ray tracing to where it's necessary, enough so that it can be done in real time alongside a rasterizer. Unfortunately, beyond a high-level overview, this is all AMD has revealed at this time. We're told a proper press release will be coming out tomorrow morning with further details.
As for the new version of RGP, 1.2 introduces interoperability with RenderDoc, a popular frame-capture-based graphics debugging tool, as well as an improved frame overview. The update also brings detailed barrier codes, which relate to the granular synchronization of graphics work under DX12.
Regardless, AMD has yet more to say on the ray-tracing topic. Along with tomorrow's press release, AMD has a GDC talk scheduled for Wednesday on “Real-time ray-tracing techniques for integration into existing renderers,” presumably discussing ProRender in greater detail.
To many out there it may seem like DirectX 12 is still a brand-new technology – and in some ways it still is – but in fact we’ve now been talking about the graphics API for the better part of half a decade. Microsoft first announced the then-next generation graphics API to much fanfare back at GDC 2014, with the initial iteration shipping as part of Windows 10 a year later. For a multitude of reasons DirectX 12 adoption is still in its early days – software dev cycles are long and OS adoption cycles are longer still – but with their low-level graphics API firmly in place, Microsoft’s DirectX teams are already hard at work on the next generation of graphics technology. And now, as we can finally reveal, the future of DirectX is going to include a significant focus on raytracing.
This morning at GDC 2018 as part of a coordinated release with some of their hardware and software partners, Microsoft is announcing a major new feature addition to the DirectX 12 graphics API: DirectX Raytracing. Exactly what the name says on the tin, DirectX Raytracing will provide a standard API for hardware and software accelerated ray tracing under DirectX, allowing developers to tap into the rendering model for newer and more accurate graphics and effects.
In conjunction with Microsoft’s new DirectX Raytracing (DXR) API announcement, today NVIDIA is unveiling their RTX technology, providing ray tracing acceleration for Volta and later GPUs. Intended to enable real-time ray tracing for games and other applications, RTX is essentially NVIDIA's DXR backend implementation. For this NVIDIA is utilizing a mix of software and hardware – including new microarchitectural features – though the company is not disclosing further details. Alongside RTX, NVIDIA is also announcing their new GameWorks ray tracing tools, currently in early access to select development partners.
With NVIDIA working with Microsoft, RTX is fully supported by DXR, meaning that all RTX functionality is exposed through the API. And while only Volta and newer architectures have the specific hardware features required for hardware acceleration of DXR/RTX, DXR's compatibility mode means that a DirectCompute path will be available for non-Volta hardware. Beyond Microsoft, a number of developers and game engines are supporting RTX, with DXR and RTX tech demos at GDC 2018.
Taking a step back, though the goal of both DXR and RTX can be summed up as better graphics, ray tracing has historically been so computationally intensive that it has never been feasible for real-time applications such as video games, instead featuring in offline rendering for movies and similar fields. Where conventional raster-based rendering translates a 3D scene into a 2D image and applies various shaders and layers on top to emulate lighting effects, ray tracing essentially models imaginary ‘beams’ of light backwards from every pixel, calculating all the associated bounces, refractions, and reflections. The end result is realistic and lifelike lighting, shadows, and reflections beyond what can be achieved with rasterization, but at the computational cost of tracing numerous rays for every frame, regardless of scene complexity.
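To make the "one ray per pixel" model above concrete, here is a minimal sketch in Python: it casts a ray from the camera through each pixel and tests for intersection against a single hardcoded sphere. The scene, camera, and resolution are all hypothetical, and a real ray tracer adds bounces, shading, and acceleration structures on top of this basic loop.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    # direction is unit length, so the quadratic's 'a' term is 1.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def render(width, height):
    """Trace one ray per pixel toward a single sphere; '#' marks a hit."""
    center, radius = (0.0, 0.0, 3.0), 1.0  # hypothetical scene: one sphere
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel to a point on an image plane at z = 1,
            # then normalize to get the ray direction from the camera origin.
            x = (i + 0.5) / width * 2.0 - 1.0
            y = 1.0 - (j + 0.5) / height * 2.0
            norm = math.sqrt(x * x + y * y + 1.0)
            direction = (x / norm, y / norm, 1.0 / norm)
            hit = ray_sphere_hit((0.0, 0.0, 0.0), direction, center, radius)
            row += "#" if hit else "."
        rows.append(row)
    return rows

if __name__ == "__main__":
    for line in render(24, 12):
        print(line)
```

Even this toy version does width × height intersection tests for a one-object scene, which hints at why tracing many rays per pixel against complex scenes has been out of reach for real-time use.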
To that end, the idea of real time ray tracing in video games has long been tossed around, but the practical performance costs have always been an issue. At least until now, as this is what NVIDIA is addressing with their RTX technology and the underlying hardware.
Unfortunately, as stated earlier, not many technical details are being disclosed, making it difficult to piece together what appears to be a multi-layered technology. NVIDIA could only confirm that some indeterminate functionality in Volta does accelerate ray tracing, and that RTX is a mix of hardware – NVIDIA also described Volta in a separate blog post as having a "ray tracing engine" – and various bits implemented in software running on the GPU's CUDA cores. Meanwhile NVIDIA also mentioned that they have the ability to leverage Volta's tensor cores in an indirect manner, accelerating ray tracing through AI denoising, where a neural network is trained to reconstruct an image using fewer rays – a technique the company has shown off in the past at GTC. RTX itself was described as productizing certain hardware and software algorithms, but it is distinct from DXR, the overarching general API.
Meanwhile for the new GameWorks ray tracing tools, NVIDIA pointed to the aforementioned ray-tracing denoiser module in NVIDIA GameWorks, along with turnkey ray tracing libraries for area shadows, glossy reflections, and ambient occlusion. As these libraries are built on top of the DXR API, NVIDIA noted that the tools are not necessarily limited to Volta architectures.
On that note, since the entire “GPU – RTX – DXR – GameWorks Ray Tracing” stack only applies to Volta, the broader public is essentially limited to the Titan V, and NVIDIA likewise noted that RTX technology at present is primarily intended for developer use. For possible ray tracing acceleration on pre-Volta architectures, NVIDIA only referred back to DXR, though Microsoft has equally deferred to vendors for hardware-related technical details. And while strict performance numbers aren’t being disclosed, NVIDIA stated that real-time ray tracing with RTX on Volta would be “integer multiples faster” than with DXR on older hardware.
While the actual release of games featuring DXR and/or RTX is up to the developers, NVIDIA outlined that they began engaging with developers some months ago, resulting in the GDC demos this week, and postulated that consumers could expect games shipping this year using real-time ray tracing with DXR and RTX. Curiously, though NVIDIA cites RTX support by developers and game engines, it remains the case that only Volta or newer hardware would support new games using RTX.
The GameWorks SDK updates will be available this quarter, with ray-traced ambient occlusion available later this summer.
In a day filled with all sorts of game development-related API and framework news, Microsoft also has an AI-related announcement for the day. Parallel to today’s DirectX Raytracing announcement – but not strictly a DirectX technology – Microsoft is also announcing that they will be pursuing the use of machine learning in both game development and gameplay through their recently revealed Windows Machine Learning framework (WinML).
Announced earlier this month, WinML is a rather comprehensive runtime framework for neural networks on Windows 10. Utilizing the industry-standard ONNX model format, WinML is able to interface with pre-trained models from Caffe2, TensorFlow, Microsoft’s own CNTK, and other machine learning frameworks. In turn, the DirectML execution layer is able to run on top of DX12 GPUs as a compute task, using the supplied models for neural network inferencing.
The initial WinML announcement was a little ambiguous, and while today’s game-focused announcement has a more specific point to it, it’s still somewhat light on details. And this is mostly because Microsoft is still putting out feelers to get an idea of what developers would be interested in doing with machine learning functionality. We’re still in the early days of machine learning for more dedicated tasks, never mind game development and gameplay where this is all brand-new, so there aren’t tried-and-true use cases to point to.
On the development front, Microsoft is pitching WinML as a means to speed up asset creation, letting machine learning models shoulder part of the workload rather than requiring an artist to develop an asset from start to finish. Meanwhile on the gameplay front, the company is talking about the possibilities of using machine learning to develop better AIs for games, including AIs that learn from the player, or simply AIs that act more like humans. None of this is new to games – adaptive AIs were around long before modern machine learning – but it’s part of a broader effort to figure out what to do with this disruptive technology.
One interesting use case that Microsoft points out, and which does seem closer to making it to market, is using machine learning for content-aware image upscaling. NVIDIA was showing this off last year at GTC as their super resolution technology, and while it’s ultimately a bit of a hack, it’s an impressive one. At the same time, similar concepts are already used in games in the form of temporal reprojection. So if super resolution could be made to run in a reasonable period of time – no longer than around 2x the time it takes to generate a frame – then I could easily see developers rendering a game at sub-native resolutions and then upscaling it with super resolution, particularly to improve gaming performance at 4K. Or, to use Microsoft’s more conservative example, such scaling methods could be used to improve the quality of textures and other assets in real time.
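The frame-budget math behind that idea can be sketched with a few hypothetical timings. None of these numbers are measured; they only illustrate when sub-native rendering plus an upscale pass beats native rendering:

```python
def total_frame_time(render_ms, upscale_ms=0.0):
    """Total cost of producing one displayed frame, in milliseconds."""
    return render_ms + upscale_ms

# Hypothetical, illustrative timings (not benchmarks):
NATIVE_4K_MS = 33.0      # rendering natively at 4K (~30 fps)
RENDER_1440P_MS = 16.0   # rendering at a sub-native resolution
UPSCALE_MS = 6.0         # ML super-resolution pass up to 4K

upscaled = total_frame_time(RENDER_1440P_MS, UPSCALE_MS)  # 22.0 ms
native = total_frame_time(NATIVE_4K_MS)                   # 33.0 ms

# The approach only pays off when the combined cost beats native rendering,
# and the upscale pass stays within the rough "2x frame time" bound above.
assert upscaled < native
assert UPSCALE_MS <= 2.0 * RENDER_1440P_MS
print(f"native 4K: {native:.1f} ms, sub-native + upscale: {upscaled:.1f} ms")
```

With these made-up numbers the upscaled path saves about a third of the frame time; if the upscale pass were slower than the savings from rendering at the lower resolution, the trick would be pointless.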
Moving on, while today’s announcement from Microsoft doesn’t introduce any further technologies, it does offer a bit more detail on the technological underpinnings of WinML. In particular, while the preview release of WinML is FP32-based, the final release will also support FP16 operations. The latter point is of particular importance, as not only do recent GPUs implement fast FP16 modes, but NVIDIA’s recent Volta architecture went one step further and included dedicated tensor cores, which are designed to work with FP16 inputs.
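As a rough illustration of why FP16 matters for inferencing – this is a toy NumPy layer, not WinML's actual API – halving the precision halves the memory footprint of weights and activations, while typically costing little accuracy at inference time:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully-connected layer: y = relu(x @ W + b).
# Shapes and weight scale are arbitrary, chosen only for illustration.
x = rng.standard_normal((1, 256)).astype(np.float32)
W = (rng.standard_normal((256, 128)) * 0.05).astype(np.float32)
b = np.zeros(128, dtype=np.float32)

def dense_relu(x, W, b):
    return np.maximum(x @ W + b, 0.0)

y32 = dense_relu(x, W, b)
# Same inference with FP16 weights/activations: half the memory traffic, and
# on GPUs with fast FP16 modes (or tensor cores) roughly double the throughput.
y16 = dense_relu(x.astype(np.float16), W.astype(np.float16), b.astype(np.float16))

# For inference, the precision loss is typically small.
max_err = np.max(np.abs(y32 - y16.astype(np.float32)))
print(f"max abs error, FP32 vs FP16: {max_err:.4f}")
```

This is also why vendor-specific fast paths matter: the FP32 and FP16 versions compute the same function, so a runtime like WinML is free to pick whichever precision the hardware runs fastest.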
Another notable development here is that while WinML generates HLSL code as a baseline path, hardware vendors will also have the option of writing DirectML metacommands for WinML to use. The idea behind these highly optimized commands is that hardware vendors can write commands that take full advantage of their hardware – including dedicated hardware like tensor cores – in order to further speed up WinML model inferencing over what is possible with a naïve HLSL program. Working with NVIDIA, Microsoft is already showing off an 8x performance improvement over baseline DirectML performance by using FP16 metacommands.
Ultimately as with today’s DirectX Raytracing announcement, Microsoft’s WinML gaming announcement is about priming developers to use the technology and to collect feedback on it ahead of its final release. More so than even DirectX Raytracing, this feels like a technology where no one is sure where it’s going to lead. So while DXR has a pretty straightforward path for adoption, it will be interesting to see what developers do with machine learning given that they’re largely starting with a blank slate. To that end, Microsoft has a couple of WinML-related presentations scheduled for this week at GDC, which hopefully should shed a bit more light on developer interest.
AMD and some of its retail partners have started a new discount campaign covering AMD Ryzen and AMD Ryzen Threadripper processors. Select AMD CPUs will be available at reduced prices when bought from participating retailers through the end of March.
The new campaign covers the two high-end Ryzen Threadripper models (1950X and 1920X), all three Ryzen 7 models (1700, 1700X, 1800X), three Ryzen 5 SKUs (1400, 1500X, 1600X), and two Ryzen 3 variants (1200, 1300X). In the U.S., four major retailers are participating in AMD’s new promo sale: Amazon, Newegg, Micro Center, and Fry’s. Amazon UK and Amazon France also sell select AMD processors at reduced prices, but it is unclear whether AMD’s campaign is global or only covers the U.S., Canada, UK, and France.
Exact discounts vary by product. For example, the Ryzen Threadripper 1950X is available for $869, 13% off its $999 MSRP, whereas the Ryzen 7 1800X gets only a 6% discount and is now available for $329 from Amazon. Meanwhile the Ryzen 3 1200 is now available for $94, marking the first time a Ryzen-branded CPU has been available for less than $100 at retail. See the table below for exact details and “buy” links.
Earlier this year AMD already slashed the official prices of its Ryzen processors in order to better compete against Intel’s products. That price cut was global and affected all Ryzen SKUs, but only one Threadripper model. By contrast, this time select retailers are offering discounts on select Ryzen CPUs as well as the two higher-end Ryzen Threadripper CPUs, so evidently AMD is also trying to address the higher end of the market with its discounts.
**AMD Ryzen Pricing with Campaign Discounts**

| Processor | Cores/Threads | Current SEP | Campaign Price |
|---|---|---|---|
| Ryzen TR 1950X (TR4) | 16C/32T | $999 | $869 |
| Ryzen TR 1920X (TR4) | 12C/24T | $799 | $669 |
| Ryzen TR 1900X (TR4) | 8C/16T | $449 | - |
| Ryzen 7 1800X (AM4) | 8C/16T | $349 | $329 |
| Ryzen 7 1700X (AM4) | 8C/16T | $309 | $289 |
| Ryzen 7 1700 (AM4) | 8C/16T | $299 | $275 |
| Ryzen 5 1600X (AM4) | 6C/12T | $219 | $198 |
| Ryzen 5 1600 (AM4) | 6C/12T | $189 | - |
| Ryzen 5 1500X (AM4) | 4C/8T | $174 | $169 |
| Ryzen 5 1400 (AM4) | 4C/8T | $169 | $150 |
| Ryzen 5 2400G (AM4) | 4C/8T | $169 | - |
| Ryzen 3 2200G (AM4) | 4C/4T | $99 | - |
| Ryzen 3 1300X (AM4) | 4C/4T | $129 | $115 |
| Ryzen 3 1200 (AM4) | 4C/4T | $109 | $94 |