msbhvn wrote:
mackered wrote:
I'm sorry, but Greenberg's comments are irrelevant to this article. If you read the original material that THIS article is talking about, it is exactly about pixel quality. Your comments refer to a previous article about Ryse, which has nothing to do with this article. Read the source material, then come back and comment. Here is the link: http://www.eurogamer.net/articles/digit ... developers
The article pretty much is about pixel quality.
Ryse is mentioned in the first paragraph of this article! My comments have been about this article, not the Eurogamer one.
I missed the part where the tech stuff was attributed to Andrew Goossen, so when I mentioned Greenberg earlier, I meant to say Goossen.
The term "per-pixel quality" refers to rendering at the display resolution (1080p is the max for TV currently), where each pixel you see on screen is rendered by the game engine. Some games on the Xbox One render at a lower resolution and upscale, meaning not all of the pixels on the screen are rendered by the game engine. Crytek say they're doing this for stylistic reasons rather than because the Xbox One GPU isn't powerful enough, and Goossen is backing them up (though he doesn't mention them specifically), saying that with the architecture of the One, matching the PS4's specs wouldn't be of any benefit, whereas the extra clock speed the driver update gave the GPU is.
And FYI, the 32bpp they're talking about in the Eurogamer article refers more to texture quality (and fill rate, which depends on texture quality) than to pixel quality as you're using the term; they are two very different things. Again, virtually no game made in the last fifteen years is designed to display in less than 24-bit colour, which makes half of your original post completely irrelevant. When sub-24-bit textures are used in a true-colour game engine (as in a lot of 360 games, for performance reasons), lighting effects will mitigate the loss of fidelity in the textures.
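For a sense of scale on what bit depth actually costs, here's the same kind of back-of-the-envelope arithmetic. The 1024x1024 texture is just an assumed example, and it ignores mipmaps and compression:

    # Uncompressed texture memory at different bit depths,
    # for an assumed 1024x1024 texture (no mipmaps, no compression).
    width, height = 1024, 1024

    for bpp in (16, 24, 32):  # bits per texel
        size_mib = width * height * (bpp / 8) / (1024 ** 2)
        print(f"{bpp}bpp: {size_mib:.1f} MiB")
    # 16bpp: 2.0 MiB / 24bpp: 3.0 MiB / 32bpp: 4.0 MiB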
I'm sorry, maybe I did not make myself clear. I am not talking about resolution; I am talking about the quality of each pixel displayed on the screen. Displaying 1080p requires a fixed grid of pixels (1920 across by 1080 down), but that is NOT what the article in question is talking about. It is talking about what is IN each pixel.
The more info IN each pixel, the more GPU power and bandwidth is required to deal with it. In the article, Microsoft admit that their tests were skewed to favour less GPU power and more cache, AND that the machine will bottleneck if they try to do both 1080p and higher per-pixel quality.
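To illustrate the bandwidth side of that claim, here's a crude Python estimate. The bytes-per-pixel figures are assumed examples (one RGBA8 target versus fatter G-buffer layouts), and real engines generate far more memory traffic than one write per pixel, so treat these as lower bounds:

    # More data stored per pixel means more memory traffic per frame.
    # Assumes a single write per pixel per frame at 1080p/60fps; overdraw,
    # blending and multiple render targets all push the real figure higher.
    pixels = 1920 * 1080
    fps = 60

    for bytes_per_pixel in (4, 8, 16):  # assumed example framebuffer sizes
        gb_per_sec = pixels * bytes_per_pixel * fps / 1e9
        print(f"{bytes_per_pixel} B/px -> {gb_per_sec:.2f} GB/s")
    # 4 B/px -> 0.50 GB/s, 8 B/px -> 1.00 GB/s, 16 B/px -> 1.99 GB/s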
At the end of the day, it is a business decision. Microsoft chose cheaper main memory and a less powerful GPU than Sony, and compensated by overclocking and adding on-board cache. For a product with a long life cycle, I would have thought they might have taken a small hit up front and made more profit at the back end. In terms of sheer hardware power, Sony have won the argument.
However, gaming is not only about the hardware; it is also about the games themselves and the environment in which we game, and for me, Microsoft have won that argument. That, however, does not stop me from being a little disappointed in what Microsoft has done. In my humble opinion, they could (and should) have put a better GPU and better main memory in the box for the expected life cycle of the product. And to call something that bottlenecks at 1080p "4K gaming capable" is frankly laughable.
If you're not supposed to put your hands in your pockets, why do they put them right where your hands hang?