
Thread: Graphics card advice

  1. #26
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Quote Originally Posted by Sulphur View Post
    Vulkan's based on Mantle, which AMD created.
    Like I said - I doubt it gave them any advantage beyond a month or two. Now, years (well, barely) later, it makes no difference. Mantle was made because all the alternatives (OGL, DX) were terrible at the time (driver overhead, no way for an app to say what it wants to do), and AMD smelled an opportunity and took it (especially as AMD's implementation of those alternatives was really, really terrible in comparison to Nvidia's [less so for DX, for reasons already mentioned]). Mantle removed that mess - and Vulkan/DX12 did the same. BECAUSE they can. Because the hardware is more or less the same and the vast majority of the abstractions are just pointless hindrances for everyone (a unification of sorts - a process I remember people noting 10+ years ago, and it never stopped).

    A thin and direct driver layer cannot have favorites. AMD does like to use Mantle in its Vulkan ads - because the average Tom does not understand any of it anyway. It is meaningless.

    Nvidia is not in the business of paying favors to competitors - quite the opposite; they are the biggest assholes around. Vulkan is not Mantle either - it is a common hardware abstraction which, most notably, contains core (for Vulkan) stuff for tile-based devices (phones etc.). Something that is completely absent from Mantle.

    Quote Originally Posted by Sulphur View Post
    Anyway, take a look see for yourself: https://www.tomshardware.com/reviews...64,5173-8.html
    Could not find anything even remotely relevant there :/. I am not terribly familiar with that mammoth-of-a-site. Maybe I do not know where to look.

    All I found was this quote: "Although Nvidia’s performance under Vulkan is much improved, AMD continues to dominate in Doom.".

    The first part of that says absolutely nothing about Vulkan implementation quality between the two - to its credit, it does not pretend to. It is just a statement of fact and exactly matches what one would expect to be the case. Even so, I could not find what the statement is based on (I guess it directly addresses the improvements from switching to Vulkan).

    As for the second part, I have no idea what it is trying to say or what it is based on (or it is just too stupid for me to easily accept the idiocy it seems to portray). I could not find anything on the site to clarify what it is supposed to mean.

    ----------------
    A few words about GPU performance.

    It is a function of:
    * GPU capabilities (inc. processing power)
    * Driver capabilities and overhead (inc. host caps etc. ... which are not relevant here, so I omit them from now on)
    * User overhead.

    Let's call them G, D, and U for short and give them weights. A few illustrative, out-of-my-ass-but-representative numbers (a toy calculation with these follows after the numbers below):

    Ancient OGL: no-one cares.

    Older OGL: G:5 D:21 U:3 (also known as dentistry using the anal approach)

    Basically the driver has to literally reverse-engineer on the fly what actually needs to be done and predict it ahead of time. A common side effect is that the GPU is bored out of its mind, as the driver/user cannot feed it enough work and it just idles around. In the past this did not matter, as the GPU was too slow to keep up anyway.

    Newer OGL has improved a lot (if you use only the right stuff): G:13 D:9 U:2
    AMD: G:12 D:12 U:2
    Nvidia: G:13 D:8 U:2

    Improved, but not completely fixed. If you tread carefully, you can feed the GPU reasonably well most of the time while sacrificing a noticeable amount of CPU time (assuming you can afford to do so).

    Vulkan/Mantle/DX12 remove a lot of the crap: G:15 D:2 U:1
    AMD, Nvidia: essentially the same (AMD has more restrictions; gradually getting rid of them would help, I guess, but there is not much room for improvement either way. IIRC AMD had pretty poor parallelization for Vulkan [interleaving GPU work] ... I don't remember. It was not relevant for me at the time).

    If your app is not driver/user overhead bound, then none of them will be of any help at all.
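
    To put a number on that last point: a toy sketch of the G/D/U model (the weights are only illustrative, like the ones above, and the perfect-overlap assumption is mine - real frames pipeline less cleanly):

        #include <algorithm>
        #include <cstdio>

        // Toy model: per frame, the CPU side costs D (driver) + U (user) units and
        // the GPU side costs G units. If the two sides overlap perfectly, the frame
        // time is whichever side is slower - so shrinking D only matters while
        // D + U > G, i.e. while the app is driver/user overhead bound.
        static double frame_time(double g, double d, double u) {
            return std::max(g, d + u);
        }

        int main() {
            // Driver-bound app: a thin driver cuts the frame from 24 to 5 units.
            std::printf("old API, small G : %.0f units\n", frame_time(5, 21, 3));
            std::printf("new API, small G : %.0f units\n", frame_time(5, 2, 1));
            // GPU-bound app: the same driver diet changes nothing at all.
            std::printf("old API, big G   : %.0f units\n", frame_time(30, 21, 3));
            std::printf("new API, big G   : %.0f units\n", frame_time(30, 2, 1));
            return 0;
        }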

    ----------
    Finding out the quality of a Vulkan/DX12 implementation is rather difficult, as what you want is to extract the weight of D from G+D+U when D is very low (and scale those by the effects of pipeline bubbles). It is much easier to do that for older OGL, as the D weight is rather huge - often to the point that you can directly measure the idle time of G (which should be, and usually is, 0 or thereabouts on newer APIs).
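
    For what it's worth, directly measuring how long the GPU is actually busy is easy enough in Vulkan with timestamp queries - a minimal sketch (the device/command-buffer setup and error handling are assumed; timestampPeriod comes from VkPhysicalDeviceLimits):

        #include <vulkan/vulkan.h>
        #include <cstdint>

        // Create a two-slot timestamp query pool (once, at init).
        VkQueryPool make_timestamp_pool(VkDevice device) {
            VkQueryPoolCreateInfo info = {};
            info.sType      = VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO;
            info.queryType  = VK_QUERY_TYPE_TIMESTAMP;
            info.queryCount = 2;
            VkQueryPool pool = VK_NULL_HANDLE;
            vkCreateQueryPool(device, &info, nullptr, &pool);
            return pool;
        }

        // While recording: bracket the frame's GPU work with two timestamps.
        // (The reset must happen outside a render pass.)
        void bracket_gpu_work(VkCommandBuffer cmd, VkQueryPool pool) {
            vkCmdResetQueryPool(cmd, pool, 0, 2);
            vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT, pool, 0);
            // ... record the frame's draw/dispatch commands here ...
            vkCmdWriteTimestamp(cmd, VK_PIPELINE_STAGE_BOTTOM_OF_PIPE_BIT, pool, 1);
        }

        // After the submission completes: elapsed GPU time in nanoseconds.
        double gpu_time_ns(VkDevice device, VkQueryPool pool, float timestampPeriod) {
            uint64_t ts[2] = {};
            vkGetQueryPoolResults(device, pool, 0, 2, sizeof(ts), ts, sizeof(uint64_t),
                                  VK_QUERY_RESULT_64_BIT | VK_QUERY_RESULT_WAIT_BIT);
            return double(ts[1] - ts[0]) * timestampPeriod;
        }

    Comparing that against the wall-clock frame time gives you the idle/bubble share I am talking about.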

    It is hard for Nvidia to improve in Doom (OGL vs Vulkan) as their OGL was not shit to begin with.
    It is easy for AMD to improve in Doom as their OGL was, and is, shit in comparison.

    I.e. switching to Vulkan is expected to show small gains for Nvidia and big gains for AMD. Also: AMD's gains will naturally rise faster than Nvidia's as new GPUs come out, since OGL overhead affects AMD more.

    This is the core of the common misunderstanding (which seems to always involve Doom).

    Quote Originally Posted by Sulphur View Post
    ... but last time they disappeared up their own real-time raytracing butthole - it was called Larrabee ...
    Oh, right. I completely forgot about that. That whole project was just perplexing.

    I hope Intel gets something useful done this time.

  2. #27
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Not talking about API quality or ease of implementation, zombe. Just raw hardware performance and benchmarks compared on a per-GPU, per-API level. Analysis of driver-level implementation is not something a hardware site would normally do, and I'm unaware of a site that actually does that, though I'd love to read up on it.

  3. #28
    Member
    Registered: Apr 2001
    Location: Switzerland
    A completely non-technical question concerning graphics cards: I was thinking that when the new Nvidia cards come out, I'd wait half a year or so and then get one. However, with the whole crypto mining thing, how much of a risk is there that cards will be more or less unavailable at that time, or that the ones that are available will cost considerably more than they did at release?

  4. #29
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Well, there's no easy answer to that one (that I can see). While mining's taken a tumble in recent times, and folks seem to have finally grokked exactly how volatile/speculative it is, the very nature of its open, decentralised philosophy means people are free to keep coming up with ASIC-resistant variants. nvidia has addressed the problem a little bit with dedicated mining cards and by being the only place you can be certain sells cards at MSRP, but that's no guarantee there won't be a constant tide of average joes baited by the crypto bubbles. The day nations decide on strong regulation for cryptocurrency is the day you'd see a massive drop in people wanting in, but when that's coming is uncertain.

    I'd say if you plan on waiting for an upgrade, see how the pickup is when the new cards arrive. If stuff gets sold out within a few weeks or less (and it probably will if there's higher efficiency and lower TDP with the new parts, since people would want to add on to/refit their mining rigs), it's likely price drops will be difficult to come by in the following months.

    I don't expect nvidia or AMD to combat mining in any serious fashion because they've been reaping the rewards of it so far. If you're lucky, supply and demand will get back to normal and you can pick up third-party cards at a discount eventually. But failing that, you should at least be able to get a vanilla card at MSRP from your local nvidia site when they're in stock.

  5. #30
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Ah, the sweet-sweet confusion.

    Is it fair to say that what you meant to convey was something like: if one has a choice, then for AMD the new APIs should be used, as its implementation of the earlier APIs is markedly bad compared to Nvidia, which pays a significantly smaller penalty on the earlier APIs. In other words, if you have a choice between equally powerful AMD and Nvidia hardware, then pick Nvidia, as it is either demonstrably better or just equal (the unlikely worst-case scenario).

    That would match this:

    Quote Originally Posted by Sulphur View Post
    AMD's generally better at DX12/Vulkan but not so great at DX11, while nvidia is a bit slower with those newer APIs but outpaces AMD in DX11.
    My suggested version above and yours quoted here are compatible. However, they don't look like it - hence why I thought you meant specifically newer-API implementation quality.

    Something I would like to see myself, too. Especially for Vulkan (my turf) - there has been a lot of wiggling around it in both the AMD and Nvidia camps.

    Basically, the early responses from Khronos/AMD/Nvidia to questions of the form "should I do A or B" were: "which one would you like to prefer?" - and to some degree that still holds. I.e. there is enough wiggle-room for usage to dictate what should be done. For example: is it worth interleaving work across queues (Nvidia has 16; AMD and Intel have a max of 1 across all hardware - i.e. they cannot do it anyway), or is it enough to depend on interleaving work on the same queue (the only option for AMD and Intel)? What difference does it make? How well can they interleave with present drivers?
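
    For the curious, the queue layout (and the transfer-granularity restriction I get into below) is visible straight from the API - a minimal sketch, assuming you already hold a VkPhysicalDevice:

        #include <vulkan/vulkan.h>
        #include <cstdio>
        #include <vector>

        // Print each queue family's flags, queue count, and image-transfer
        // granularity (anything other than 1x1x1 means restricted transfers).
        void print_queue_families(VkPhysicalDevice gpu) {
            uint32_t count = 0;
            vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
            std::vector<VkQueueFamilyProperties> families(count);
            vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());
            for (uint32_t i = 0; i < count; ++i) {
                const VkQueueFamilyProperties& f = families[i];
                std::printf("family %u: %u queue(s), flags 0x%x, granularity %ux%ux%u\n",
                            i, f.queueCount, f.queueFlags,
                            f.minImageTransferGranularity.width,
                            f.minImageTransferGranularity.height,
                            f.minImageTransferGranularity.depth);
            }
        }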

    Totally unrelated, but relevant to what I said earlier: I noticed that there are now AMD (*) cards that have their transfer queue limitations lifted (I suspect that the possibility for the limitation was added to the Vulkan specification ONLY (**) for AMD, as I have never seen ANY other GPU out of the 3500 listed with any limitations at all - not even any of the mobiles need it).

    *) http://vulkan.gpuinfo.org/displayrep...#queuefamilies "AMD cape verde" - f* finally has a usable transfer queue.
    **) Not the first time something gets added to the spec to accommodate AMD junk - I went to Nvidia because of a stunt like that, when OGL for AMD/ATI got a max of 4 texture indirections even for their latest and greatest cards, while everyone else said: use as many as you want and can cram into the max instruction count (they literally just used the same number for the limit - because, well, you have to write something there and the concept is meaningless for everyone else, so write whatever).

    edit: erm, never mind. AMD still has the transfer limitations even for the "Radeon RX Vega". The "AMD cape verde" was an LLVM version for Gentoo Linux - that hardly counts. Back to "normal" I guess.
    Last edited by zombe; 18th Jun 2018 at 05:42.

  6. #31
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Quote Originally Posted by zombe View Post
    Ah, the sweet-sweet confusion.

    Is it fair to say that what you meant to convey was something like: if one has a choice, then for AMD the new APIs should be used, as its implementation of the earlier APIs is markedly bad compared to Nvidia, which pays a significantly smaller penalty on the earlier APIs. In other words, if you have a choice between equally powerful AMD and Nvidia hardware, then pick Nvidia, as it is either demonstrably better or just equal (the unlikely worst-case scenario).
    Indeed. However, you've added detail to it by breaking down the components of overall performance, so that's been helpful to anyone reading, I'm sure. It's good to hear from someone who has their hands in the metaphorical guts of the system. Very enlightening.

  7. #32
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    In case I have scared anyone away from AMD, let me add a bit more of my enthusiastic blabbering.

    First, it is easier if we say that Intel does not exist - because it essentially does not. OK, with that out of the way ... here goes ...

    Story time. A few weeks ago I was searching for a suitable texture format for this-or-that. I easily found one I was happy with, proceeded to check its support, and found that a considerable percentage of cards do not support it. Probably mobiles, but let's check. So I made a query to list all devices that do not support it ... every single one of them AMD. That was in fact the entire AMD lineup - past and present. So I did not use that format.
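
    (The check itself is trivial, for anyone wondering - a minimal sketch. The actual format I wanted is beside the point, so the one in the usage comment is just a stand-in:)

        #include <vulkan/vulkan.h>

        // Can this device sample the given format with optimal tiling?
        bool supports_sampled(VkPhysicalDevice gpu, VkFormat format) {
            VkFormatProperties props;
            vkGetPhysicalDeviceFormatProperties(gpu, format, &props);
            return (props.optimalTilingFeatures &
                    VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) != 0;
        }

        // Usage: pick a fallback when unsupported (which is what I ended up doing).
        // if (!supports_sampled(gpu, VK_FORMAT_B4G4R4A4_UNORM_PACK16)) { /* fall back */ }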

    And that is the point. Games are made foremost for the lowest-common-denominator hardware - and that is AMD (their market share is worryingly low, but still big enough). In other words, for a consumer it does not matter much that internally AMD is a bit crappy - they will usually never see it. AMD's old strategy was to make cards cheaply but with computation power equal to Nvidia's, by doing the minimum needed for the DX camp and nothing else. That did not really work - but not because the strategy was not solid. I think it was because their drivers were terrible and never went anywhere, which nullified their cheap-hardware advantage. A death spiral ensued, with dire predictions for the future. Mantle to the rescue!

    Never underestimate a cornered dog. While Mantle did not really fly, it paved the way forward for everyone (i.e. "Oi! Guys, our hardware is fairly similar now - is it not time to make a new API without the bollocks abstraction shitfest?"). Now we have Vulkan (and DX12 too) - which benefits everyone:
    * Intel was completely hopeless at every API before - Vulkan it can do (sadly, the integrated stuff is fairly useless - let's hope for a discrete GPU from them. They now have a chance where there was none before).
    * AMD had persistent problems with the earlier APIs - Vulkan it can do.
    * Nvidia, while shitting bricks, could manage the earlier APIs fine - but Vulkan is just better.

    As the pre-Vulkan/DX12 stuff dies out, AMD has a strong comeback opportunity. Don't dismiss AMD outright.

  8. #33
    Member
    Registered: Sep 2001
    Location: Qantas
    For me, the decision making process is simple. I look at benchmarks of the games I want to play and identify the AMD and NVidia cards that provide acceptable frame rates on those games. If the acceptable AMD and NVidia alternatives are similar in price, energy consumption, and noise, I will buy AMD simply to bolster the competition. We will all be worse off if NVidia puts AMD out of the gaming GPU business. AMD used to offer better price/performance before all this crypto-mining nonsense took off, but these days it is near parity, with the advantage even going to NVidia in some cases. One thing I don't like about NVidia is G-Sync and the associated markup. My monitor supports FreeSync, but the minimum refresh rate it will sync to is 52 Hz, so it's not a big selling point. For me, right now, the higher energy consumption and thermal output of the high-end AMD cards tips me toward the NVidia side.

  9. #34
    Zombified
    Registered: Sep 2004
    My process is also simple - buy whatever has a 6-pin power connector and the best price/performance ratio at the given moment, and is NOT Nvidia, because f*ck them for delivering the final blow to 3dfx. So yeah, ATI/AMD.

  10. #35
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Almost completely tangential, but more news on GPU availability: https://seekingalpha.com/article/418...entory-problem.

    Whether that's good or bad news is... well, it's not really open to interpretation - it's kinda stupid if they really do have an inventory surplus that's preventing them from announcing the next generation. However, as the writer discloses, they're shorting nvidia stock, so take the subjective bits with a few shakes of salt.

  11. #36
    Member
    Registered: Sep 2001
    Location: Qantas
    This sounds like good news to me. There may be pent-up demand among gamers, but prices are still somewhat inflated. GTX 1060, 1070, and 1080 cards are all selling for more now than they did at introduction in 2016. Even MSRP on those cards is higher now than it was in 2016. NVidia is advertising 1080 Ti cards for $699, but they are perpetually out of stock. All the other decent 1080 Ti cards are in the $800-900 range. If there is a surplus of GPUs and NVidia is aggressively buying memory, I presume this means their own boards will become consistently available again, which will put some downward price pressure on their partners.

  12. #37
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    It's really not that great, because let's say you pull the trigger on getting one of those parts now. They're old, having been in the market for 2 years now, which is somewhat unprecedented given the normal cadence of releases for GPUs. It'd be a bit tragic if you get one of them now and in a few months' time there's a bunch of new parts released that effectively replace the 10xx line with better-performing SKUs at competitive prices - given the earlier rumours, we're likely to see something new in the coming months at least. But it's not fun, this game, and the uncertainty of when nvidia/AMD are releasing the next generation of parts hangs over the enterprise like the sword of Damocles.

  13. #38
    Zombified
    Registered: Sep 2004
    On a slightly related note, trying to find a good graphics card bargain on eBay might not be such a good idea;


