
Thread: Graphics card advice

  1. #26
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Quote Originally Posted by Sulphur View Post
    Vulkan's based on Mantle, which AMD created.
    Like I said - I doubt it gave them any advantage beyond a month or two. Now, years (well, barely) later, it makes no difference. Mantle was made because all the alternatives (OGL, DX) were terrible at the time (driver overhead, no way for an app to say what it actually wants to do), and AMD smelled an opportunity and took it (especially as AMD's implementation of those alternatives was really, really terrible in comparison to Nvidia's [less so for DX, for reasons already mentioned]). Mantle removed that mess - and Vulkan/DX12 did the same. BECAUSE they can. Because the hardware is more-or-less the same and the vast majority of the abstractions are just pointless hindrances for everyone (a unification of sorts - a process I remember people noting 10+ years ago, and it never stopped).

    A thin and direct driver layer cannot play favorites. AMD does like to bring up Mantle in its Vulkan ads - because the average Tom does not understand any of it anyway. It is meaningless.

    Nvidia is not in the business of doing favors for competitors - quite the opposite; they are the biggest assholes around. Vulkan is not Mantle either - it is a common hardware abstraction which, most notably, contains core (for Vulkan) support for tile-based devices (phones etc.). Something that is completely absent from Mantle.

    Quote Originally Posted by Sulphur View Post
    Anyway, take a look see for yourself: https://www.tomshardware.com/reviews...64,5173-8.html
    Could not find anything even remotely relevant there :/. I am not terribly familiar with that mammoth of a site - maybe I do not know where to look.

    All I found was this quote: "Although Nvidia’s performance under Vulkan is much improved, AMD continues to dominate in Doom.".

    The first part of that says absolutely nothing about the Vulkan implementation quality of the two - to its credit, it does not pretend to. It is just a statement of fact, and exactly what one would expect to be the case. Even so, I could not find what the statement is based on (I guess it refers to the improvements from switching to Vulkan).

    As for the second part, I have no idea what it is trying to say or what it is based on (or it is just too stupid for me to easily accept the idiocy it seems to portray). Could not find anything on the site to clarify what it is supposed to mean.

    ----------------
    A few words about GPU performance.

    It is a function of:
    * GPU capabilities (inc. processing power)
    * Driver capabilities and overhead (incl. host capabilities etc. ... which are not relevant here, so I omit them from now on)
    * User overhead.

    Let's call them G, D and U for short and give them weights. A few illustrative, out-of-my-ass-but-representative numbers:

    Ancient OGL: no-one cares.

    Older OGL: G:5 D:21 U:3 (also known as dentistry using the anal approach)

    Basically, the driver has to literally reverse-engineer on the fly what actually needs to be done, and predict it ahead of time. A common side effect is that the GPU is bored out of its mind, as the driver/user cannot feed it enough work and it just idles around. In the past this did not matter, as the GPU was too slow to keep up anyway.

    Newer OGL has improved a lot (if you use only the right stuff): G:13 D:9 U:2
    AMD: G:12 D:12 U:2
    Nvidia: G:13 D:8 U:2

    Improved, but not completely fixed. Still, if you tread carefully, you can feed the GPU reasonably well most of the time while sacrificing a noticeable amount of CPU time (assuming you can afford to do so).

    Vulkan/Mantle/DX12 remove a lot of the crap: G:15 D:2 U:1
    AMD, Nvidia: essentially the same (AMD has more restrictions; gradually getting rid of them would help, I guess, but there is not much room for improvement either way. IIRC AMD had pretty poor parallelization for Vulkan [interleaving GPU work] ... don't remember. It was not relevant for me at the time).

    If your app was not driver/user-overhead bound, then none of them will be of any help at all.
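
    To make that last point concrete, here is a toy sketch (my own made-up milliseconds, not the weights above) of the usual pipelined reading: the CPU side (D+U) and the GPU side (G) run in parallel, so the frame takes as long as the slower side - and a thinner driver only matters when the CPU side is the slower one:

    #include <stdio.h>

    /* Toy frame-time model: the CPU side (driver D + user U) and the GPU
       side (G) run in parallel, so the frame takes as long as the slower
       side. All numbers are made-up milliseconds, purely illustrative. */
    static double frame_ms(double g, double d, double u)
    {
        double cpu = d + u;
        return (g > cpu) ? g : cpu;     /* the slower side sets the pace */
    }

    int main(void)
    {
        /* Overhead-bound app: a 21 ms driver swamps a 5 ms GPU job,
           so cutting D down to 2 ms is a huge win. */
        printf("old API, overhead bound: %.1f ms\n", frame_ms(5.0, 21.0, 3.0));
        printf("new API, same work:      %.1f ms\n", frame_ms(5.0,  2.0, 1.0));

        /* GPU-bound app: the GPU dominates either way, so the thinner
           driver changes nothing at all. */
        printf("old API, GPU bound:      %.1f ms\n", frame_ms(15.0, 4.0, 1.0));
        printf("new API, GPU bound:      %.1f ms\n", frame_ms(15.0, 2.0, 1.0));
        return 0;
    }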

    ----------
    Finding out the quality of a Vulkan/DX12 implementation is rather difficult, as what you want is to extract the weight of D from G+D+U when D is very low (and to scale those by the effects of pipeline bubbles). It is much easier to do for older OGL, as the D weight is huge - often to the point that you can directly measure the idle time of G (which should be, and usually is, about 0 on the newer APIs).
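
    For illustration, the classic way to eyeball that on older GL is a GL_TIME_ELAPSED query around the frame. A rough sketch - render_frame() and now_ns() are hypothetical stand-ins for your own frame code and clock:

    #include <GL/glew.h>   /* any loader exposing ARB_timer_query works */
    #include <stdio.h>

    /* Hypothetical hooks - replace with your own frame code and clock. */
    extern void render_frame(void);
    extern GLuint64 now_ns(void);

    /* Wrap one frame's GL calls in a GL_TIME_ELAPSED query and compare
       GPU-busy time against wall-clock time; the gap is roughly D + U
       plus pipeline bubbles. 'query' comes from glGenQueries. */
    void profile_frame(GLuint query)
    {
        GLuint64 t0 = now_ns();

        glBeginQuery(GL_TIME_ELAPSED, query);
        render_frame();                 /* all draw calls for the frame */
        glEndQuery(GL_TIME_ELAPSED);

        glFinish();                     /* crude, but drains the pipeline */
        GLuint64 wall = now_ns() - t0;

        GLuint64 busy = 0;
        glGetQueryObjectui64v(query, GL_QUERY_RESULT, &busy);
        printf("gpu busy %.2f ms, wall %.2f ms, idle %.2f ms\n",
               busy / 1e6, wall / 1e6, (wall - busy) / 1e6);
    }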

    It is hard for Nvidia to improve in Doom (OGL vs Vulkan), as their OGL was not shit to begin with.
    It is easy for AMD to improve in Doom, as their OGL was - and is - shit in comparison.

    I.e., switching to Vulkan is expected to show small gains for Nvidia and big gains for AMD. Also: AMD's gains will naturally grow faster than Nvidia's as new GPUs come out, since OGL overhead hurts AMD more.
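
    With toy numbers (entirely hypothetical overheads, same parallel model as before), the asymmetry falls right out - identical GPU work and an identical Vulkan driver on both still produce very different headline gains:

    #include <stdio.h>

    /* Same max(G, D+U) toy model as above; all numbers hypothetical. */
    static double frame_ms(double gpu, double overhead)
    {
        return gpu > overhead ? gpu : overhead;
    }

    int main(void)
    {
        double gpu = 6.0;                /* identical GPU work, in ms    */
        double nv_gl = 7.0, amd_gl = 12.0, vk = 1.5; /* driver overheads */

        printf("Nvidia: GL %.1f ms -> VK %.1f ms (+%.0f%%)\n",
               frame_ms(gpu, nv_gl), frame_ms(gpu, vk),
               100.0 * (frame_ms(gpu, nv_gl) / frame_ms(gpu, vk) - 1.0));
        printf("AMD:    GL %.1f ms -> VK %.1f ms (+%.0f%%)\n",
               frame_ms(gpu, amd_gl), frame_ms(gpu, vk),
               100.0 * (frame_ms(gpu, amd_gl) / frame_ms(gpu, vk) - 1.0));
        return 0;
    }

    Run it and Nvidia shows +17% while AMD shows +100% - with the exact same Vulkan quality on both sides.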

    This is the core of the common misunderstanding (which seems to always involve Doom).

    Quote Originally Posted by Sulphur View Post
    ... but last time they disappeared up their own real-time raytracing butthole - it was called Larrabee ...
    Oh, right. Completely forgot that. That whole project was just perplexing.

    I hope Intel gets something useful done this time.

  2. #27
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Not talking about API quality or ease of implementation, zombe. Just raw hardware performance and benchmarks compared on a per-GPU, per-API level. Analysis of driver-level implementation is not something a hardware site would normally do, and I'm unaware of a site that actually does that, though I'd love to read up on it.

  3. #28
    Member
    Registered: Apr 2001
    Location: Switzerland
    Completely non-technical question concerning graphics cards: I was thinking that when the new Nvidia cards come out, I'd wait half a year or so and then get one. However, with the whole crypto-mining thing, how much of a risk is there that cards will be more or less unavailable by then, or that the ones available will cost considerably more than they did at release?

  4. #29
    Chakat sex pillow
    Registered: Sep 2006
    Location: not here
    Well, there's no easy answer to that one (that I can see). While mining's taken a tumble in recent times, and folks seem to have finally grokked exactly how volatile/speculative it is, the very nature of its open, decentralised philosophy means people are free to keep coming up with ASIC-resistant variants. Nvidia has addressed the problem a little with dedicated mining cards, and its own store is the only place you can be certain sells cards at MRP, but that's no guarantee there won't be a constant tide of average Joes baited by the crypto bubbles. The day nations decide on strong regulation for cryptocurrency is the day you'd see a massive drop in people wanting in, but when that's coming is uncertain.

    I'd say if you plan on waiting for an upgrade, see how the uptake is when the new cards arrive. If stock sells out within a few weeks or less (and it probably will if the new parts bring higher efficiency and lower TDP, since people will want to add to or refit their mining rigs), then price drops will likely be hard to come by in the following months.

    I don't expect Nvidia or AMD to combat mining in any meaningful fashion, because they've been reaping the rewards of it so far. If you're lucky, supply and demand will get back to normal and you can eventually pick up third-party cards at a discount. Failing that, you should at least be able to get a vanilla card at MRP from your local Nvidia site when they're in stock.

  5. #30
    Member
    Registered: Apr 2002
    Location: Third grave from left.
    Ah, the sweet-sweet confusion.

    Is it fair to say that what you meant to convey was something like this: if one has a choice, then for AMD the new APIs should be used, as its implementation of the earlier APIs is markedly bad compared to Nvidia's, which pays a significantly smaller penalty on the earlier APIs. In other words, if you have a choice between equally powerful AMD and Nvidia hardware, pick Nvidia, as it is either demonstrably better or merely equal (the unlikely worst-case scenario).

    That would match this:

    Quote Originally Posted by Sulphur View Post
    AMD's generally better at DX12/Vulkan but not so great at DX11, while nvidia is a bit slower with those newer APIs but outpaces AMD in DX11.
    My suggested version above and yours quoted here are compatible. However, they don't look like it - hence why I thought you meant specifically the implementation quality of the newer APIs.

    That is something I would like to see myself too. Especially for Vulkan (my turf) - there has been a lot of wiggling in both the AMD and Nvidia camps around it.

    Basically, the early responses from Khronos/AMD/Nvidia to questions of the form "should I do A or B" were: "which one would you prefer?" - and to some degree that still holds. I.e., there is enough wiggle room for usage to dictate what should be done. For example: is it worth interleaving work across queues (Nvidia has 16; AMD and Intel have a max of 1 across all their hardware - i.e., they cannot do it anyway), or is it enough to depend on interleaving work on the same queue (the only option for AMD and Intel)? What difference does it make? How well can they interleave with present drivers?
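
    For reference, those queue counts are not secret sauce - anyone can dump them. A minimal sketch (error handling omitted for brevity):

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    /* List each physical device's queue families, their queue counts and
       capability flags - the numbers behind the "Nvidia has 16 queues,
       AMD/Intel have 1" comparison. */
    int main(void)
    {
        VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
        VkInstance inst;
        vkCreateInstance(&ici, NULL, &inst);

        uint32_t devCount = 0;
        vkEnumeratePhysicalDevices(inst, &devCount, NULL);
        VkPhysicalDevice devs[8];
        if (devCount > 8) devCount = 8;
        vkEnumeratePhysicalDevices(inst, &devCount, devs);

        for (uint32_t d = 0; d < devCount; ++d) {
            uint32_t famCount = 0;
            vkGetPhysicalDeviceQueueFamilyProperties(devs[d], &famCount, NULL);
            VkQueueFamilyProperties fams[16];
            if (famCount > 16) famCount = 16;
            vkGetPhysicalDeviceQueueFamilyProperties(devs[d], &famCount, fams);

            for (uint32_t f = 0; f < famCount; ++f) {
                printf("device %u family %u: %u queue(s)%s%s%s\n",
                       d, f, fams[f].queueCount,
                       (fams[f].queueFlags & VK_QUEUE_GRAPHICS_BIT) ? " graphics" : "",
                       (fams[f].queueFlags & VK_QUEUE_COMPUTE_BIT)  ? " compute"  : "",
                       (fams[f].queueFlags & VK_QUEUE_TRANSFER_BIT) ? " transfer" : "");
            }
        }
        vkDestroyInstance(inst, NULL);
        return 0;
    }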

    Totally unrelated, but relevant to what I said earlier: I noticed that there are now AMD (*) cards that have their transfer queue limitations lifted (I suspect that the possibility for the limitation was added to the Vulkan specification ONLY (**) for AMD, as I have never seen ANY other GPU out of the ~3500 listed with any limitations at all - not even any of the mobile ones need it).

    *) http://vulkan.gpuinfo.org/displayrep...#queuefamilies "AMD cape verde" - f* finally has a usable transfer queue.
    **) Not the first time something got added to a spec to accommodate AMD junk - I went with Nvidia because of a stunt like that, back when OGL on AMD/ATI allowed a max of 4 texture indirections even on their latest and greatest cards, while everyone else said: use as many as you want and can cram into the max instruction count (they literally just used the same number for the limit - because, well, you have to write something there, and the concept is meaningless for everyone else, so write whatever).
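
    For the curious: the limitation in question is, as far as I can tell, the minImageTransferGranularity field of VkQueueFamilyProperties - it is what the gpuinfo.org queue family pages show. A little helper of my own (not from any SDK) to flag a restricted family:

    #include <vulkan/vulkan.h>
    #include <stdbool.h>

    /* minImageTransferGranularity is a (w,h,d) granularity that image
       copies on a queue family must respect: (1,1,1) means any
       offset/extent is fine, while (0,0,0) means only whole mip levels
       may be transferred on that queue. My own helper, as a sketch: */
    static bool transfer_is_restricted(const VkQueueFamilyProperties *fam)
    {
        VkExtent3D g = fam->minImageTransferGranularity;
        return !(g.width == 1 && g.height == 1 && g.depth == 1);
    }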

    edit: erm, never mind. AMD still has the transfer limitations, even for the "Radeon RX Vega". The "AMD cape verde" entry was an LLVM version on Gentoo Linux - that hardly counts. Back to "normal", I guess.
    Last edited by zombe; 18th Jun 2018 at 05:42.

