Blizzard Entertainment developed and released Overwatch, a 2016 team-based multiplayer first-person shooter. Overwatch is a “hero shooter” that divides players into two teams of six and lets them choose from a large roster of characters known as “heroes,” each with unique abilities. Teams compete to complete map-specific objectives within a set amount of time.
The prevalence of real-time multiplayer games and the rise of team-based first-person shooters like Team Fortress 2 motivated part of the Titan team to develop a hero-based shooter that favored teamwork. Some aspects of Overwatch are drawn from the Titan project, which was canceled.
Before the game’s release, an open beta attracted nearly 10 million players. Critics universally lauded Overwatch for its simplicity, hero characters’ diverse appeal, cartoonish visual style, and fun gameplay.
Overwatch is widely regarded as one of the best video games of all time, having won several game of the year awards and other honors. Blizzard funds and produces the global Overwatch League, a prominent esport.
Overwatch 2, a sequel announced in 2019, will feature new cooperative multiplayer modes based on player versus environment (PvE). In this article, we will look at some of the best graphics cards for Overwatch in 2022. We have reviewed cards from several manufacturers.
EVGA GeForce RTX 2080 Ti
Boost Clock: 1650 MHz
Memory Clock: 14000 MHz
Bus Type: PCIe 3.0
The EVGA GeForce RTX 2080 Ti is one of the most powerful GPUs on the market, whether you’re looking for more raw performance or want a head start on Nvidia’s vision of a ray-traced future.
On this latest GPU, you’ll find a surprising number of new ports. NVLink, which provides roughly fifty times the bandwidth of previous technologies, has replaced the high-bandwidth SLI connector that Nvidia used for multi-card setups for years.
The EVGA GeForce RTX 2080 Ti has 11GB of GDDR6 VRAM, 4,352 CUDA cores, and a boost clock of 1,635MHz, thanks in part to a first-ever 90MHz factory overclock on the reference spec, and it costs nearly twice as much as the graphics card it replaces.
The GTX 1080 Ti it replaces, by comparison, has 11GB of last-generation GDDR5X VRAM, 3,584 CUDA cores, and a maximum boost clock of 1,582MHz.
RT and Tensor cores are two new types of cores in this GPU that its predecessors lacked. The RTX 2080 Ti’s 68 RT Cores enable real-time ray tracing, letting this graphics card render far more complex lighting scenarios and natural shadows than the 1080 Ti ever could.
Meanwhile, 544 Tensor Cores add artificial intelligence (AI) to the mix, which Nvidia uses to improve anti-aliasing efficiency. According to Nvidia, Turing is eight times faster than Pascal at processing anti-aliasing thanks to machine learning.
Tensor Cores also power a new technique known as Deep Learning Super Sampling (DLSS), which upscales a lower rendering resolution while also applying anti-aliasing.
The RTX 2080 Ti has two NVLink connectors, which can deliver up to 100GB/s of total bandwidth, enough to support several 8K monitors in Surround mode.
There’s also a newly added USB-C video-out port on the rear, a connector that is becoming more common on new monitors. Since the USB-C 3.1 Gen 2 port supports UHD video and outputs 27 watts of power, future virtual reality headsets may need only one cable to power up.
The EVGA GeForce RTX 2080 Ti (and, for that matter, the entire Turing-based RTX series) features the first dual-fan cooler ever seen on an Nvidia Founders Edition, a.k.a. reference, card.
Although blower-style coolers are good at exhausting heat away from the rest of your components, their cooling capacity has traditionally been lower due to the limited amount of air a single fan can move. Dual- and multi-fan designs, on the other hand, move much more air but also leave more heat inside your computer case. The debate over which approach is better has yet to be settled.
Meanwhile, Tensor Cores tend to pay off in the long run by reducing anti-aliasing and supersampling overhead.
- Capable of 4K gaming at 60fps
- DLSS is another exciting feature with a ton of potential
- Sets a new bar for single-GPU performance
- The only real option for 4K PC gaming, and the high price reflects that
- Game support for unique ray tracing features will have to wait
- Founders Edition commands a $200 premium over an already expensive base/reference card
MSI Gaming GeForce GTX 1650 Super
Boost Clock: 1755 MHz
Video Memory: 4GB GDDR6
Output: DisplayPort x 3 (v1.4) / HDMI 2.0b x 1
The 1650 Super, like the other Super-branded GPUs released this year (the RTX 2080 Super, RTX 2070 Super, RTX 2060 Super, and GTX 1660 Super), is a minor update to an existing part. The key element here is the TU116 GPU; if you’re familiar with Nvidia’s lineup, that’s a step up from the lower-end Turing TU117 used in the base GTX 1650.
The performance is just what you’d expect: the 1650 Super is consistently about 30% faster than the standard GTX 1650 across all three test settings and resolutions. It’s also roughly 30% faster than the old GTX 970 and the RX 570 4GB, though the advantage shrinks to 20% at 1440p.
In terms of core specifications, the GTX 1650 Super is a significant improvement over the GTX 1650. It has 42 percent more GPU cores than the 1650, and although it retains a 128-bit memory interface, the switch to 12Gbps GDDR6 gives it 50 percent more bandwidth.
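That 50 percent figure follows directly from the memory math: peak bandwidth is the per-pin data rate multiplied by the bus width. Here is a minimal sketch (the helper name is my own, not from any GPU tool):

```python
def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width, converted to bytes."""
    return data_rate_gbps * bus_width_bits / 8

# GTX 1650: 8Gbps GDDR5 on a 128-bit bus
print(memory_bandwidth_gbs(128, 8))   # 128.0 GB/s
# GTX 1650 Super: 12Gbps GDDR6 on the same 128-bit bus
print(memory_bandwidth_gbs(128, 12))  # 192.0 GB/s, i.e. 50% more
```

Since the bus width is unchanged, the bandwidth gain is exactly the data-rate gain: 12 / 8 = 1.5.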
MSI’s latest GTX 1650 Super card is nearly identical to the GTX 1650 Gaming X, with a few differences. Both cards need a 6-pin PCIe power connector, but the 1650 Super configuration differs slightly: it uses the same board features as the 1660 cards, which makes sense given that they share the same TU116 GPU.
In comparison to MSI’s vanilla 1650 card, the 1650 Super has an additional DisplayPort output.
MSI’s 1650 Super Gaming X has a factory overclock, like many custom cards. It’s not much of a difference in this case: the boost clock is 1755MHz, compared to 1725MHz for the reference spec.
That’s a minor bump, and every other 1650 Super should be able to keep up. It does, however, hint at headroom for manual overclocking.
The VRAM, in particular, is capable of at least 15Gbps, which is MSI Afterburner’s default limit. And as with most Turing GPUs, real-world clock speeds usually run well above the rated boost clock.
At stock, I saw clock speeds of 1850-1900MHz and was able to overclock another 100MHz. Depending on the game and settings, the memory and core overclocks together give an extra 10-15% performance. Of course, this assumes everything remains stable.
Another factor to keep in mind is that the GTX 1650 Super has a 100W TDP, which necessitates a 6-pin power connector. You’ll have to stick with the vanilla 1650 if your PC doesn’t have any PCIe power cables (which is the case with some budget pre-built PCs).
It’s nearly as fast as the 1660 at 1080p medium, but it falls about 10% behind at 1080p ultra and 17% behind at 1440p ultra.
It’s also worth noting that the 1650 Super nearly breaks 60 frames per second in every game I tested at 1080p medium (except Metro Exodus), but that’s much less likely at 1080p ultra.
The overall average at 1080p ultra is 61 frames per second, but only around half of the games break 60fps, with the rest ranging from about 40 to 55fps.
If you’re looking to play games like Fortnite, Overwatch, CS:GO, or Rainbow Six Siege at 1080p and maximum quality, the 1650 Super will suffice (or come close). Heavier games like Assassin’s Creed, Metro Exodus, and Borderlands 3 will benefit from lowering a few settings.
- Excellent value
- Great OC potential
- Cooler, quieter, and more power-efficient than competing Radeon GPUs
- No hardware support for RTX and DLSS
- Only a minimal factory overclock
ASUS GeForce GTX 1060
Memory Clock: 2002 MHz
Boost Clock: 1709 MHz
Base Clock: 1506 MHz
Since none of the other Turing GPUs could fill the role of a budget entry among the best graphics cards, the GeForce GTX 1650 was a foregone conclusion. It is almost certainly the final implementation of the Turing architecture.
Key features and Performance
It comes in 6GB or 3GB versions, with an impressive 1,280 CUDA cores, 80 texture units, and 48 ROPs, running 8Gbps memory on a 192-bit bus. With 4.4 billion transistors, a 120-watt TDP, and a reference boost clock of 1708MHz (before GPU Boost gets its hands on it), we’ve got ourselves a winner.
The GTX 1650 is powered by the latest TU117 GPU, a smaller and less costly version of the TU116 used in the GTX 1660 and 1660 Ti cards. The GTX 1650 has 4GB of GDDR5 memory, which is clocked at 8GT/s, the same as the GTX 1660 and the previous generation GTX 1060 cards.
It has 128GB/s of bandwidth, slightly more than the GTX 1050 Ti, thanks to four active memory controllers on a 128-bit bus. It also contains 32 ROPs (Render Outputs).
TU117 and the GTX 1650 have 14 SMs in the GPU core, translating to 896 CUDA cores and 56 texture units. The GTX 1650 can perform concurrent FP32 and INT calculations, speeding up gaming workloads by 15-35 percent over the previous Pascal architecture.
Nvidia’s boost clocks are usually conservative on most cards. The stock GTX 1650 has a boost clock of 1665MHz, which gives it a theoretical 2,984 GFLOPS. That’s slower than the GTX 1060 cards but well ahead of the GTX 1050.
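That theoretical GFLOPS figure comes from the standard throughput formula: each CUDA core can retire one fused multiply-add, i.e. two FP32 operations, per clock. A quick sketch to reproduce it (the function name is my own):

```python
def fp32_gflops(cuda_cores: int, boost_clock_mhz: float) -> float:
    """Theoretical FP32 throughput: cores x 2 ops per clock (FMA) x clock in MHz."""
    return cuda_cores * 2 * boost_clock_mhz / 1000

# GTX 1650: 896 CUDA cores at a 1665MHz boost clock
print(round(fp32_gflops(896, 1665)))  # 2984 GFLOPS
```

The same formula reproduces the GTX 1070’s quoted 6.5 TFLOPs figure: 1,920 cores at a 1,683MHz boost clock works out to roughly 6,463 GFLOPS.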
Design and cooling
It’s all built on TSMC’s 12nm lithography, leaving AMD’s Radeon VII on 7nm alone for the time being. With 4.7 billion transistors, the die is about a third smaller than the TU116. Although factory-overclocked cards (like the MSI GTX 1650 Gaming X 4G I’m using) have higher clock speeds and require a 6-pin PEG connector, the GTX 1650 is designed to run without one.
The lack of the usual SLI fingers on the top of the card was instantly visible when opening up this beautiful Nvidia packaging.
The transition from Maxwell to Pascal is remarkable. The GTX 1080 gave the high end a nearly 50% boost over the previous generation’s equivalent, and the ASUS GeForce GTX 1060 follows suit. It’s cool, quiet, and surprisingly well-rounded for a card of this price. Overall, the GTX 1060 is a standout and one of the most efficient graphics cards for Overwatch.
- Brawny performance for a mid-range video card
- Easily overclocked
- Well suited to FHD or QHD gaming
- Expensive among GTX 1060 partner cards
- The card is much bigger than the Founders Edition
MSI Gaming GeForce GTX 1070
CUDA Cores: 1920
Base Clock: 1506 MHz
Boost Clock: 1683 MHz
Height: 5.5″ / 140 mm
The MSI Gaming GeForce GTX 1070 is the successor to the GTX 970, the greatest bang-for-your-buck graphics card of its generation. Packed with 1,664 CUDA cores and a boost clock of 1,178MHz, the 4GB (or 3.5GB if you’re pedantic) GTX 970 was more than capable of powering the best PC games at 1080p, and it still is. The GTX 1070, however, has taken the crown from its value-oriented precursor.
The GTX 1070, on the other hand, has 256 more CUDA cores, bringing the total to 1,920, as well as a higher boost clock of 1,683MHz. Plus, there’s an extra 4.5GB of GDDR5 VRAM.
The MSI Gaming GeForce GTX 1070, of course, is built on Pascal’s mighty 16nm FinFET manufacturing process, which allows for 6.5 TFLOPs of compute. That puts it on par with some of the best graphics cards on the market, such as the Titan X, at a fraction of the cost.
The GTX 1070 model we’ve got here is MSI’s top-of-the-line Gaming X, which includes a fully custom PCB, an enhanced fan design, LED lighting, and, as usual, a factory overclock; a substantial one at that. Thanks to MSI’s Afterburner software, you can select between three modes: silent, gaming, and overclocking.
It’s an excellent follow-up to the previous generation, but a few things irritate us here, not least the RGB LEDs on a mainly black-and-red card, since you can’t realistically set the color to anything but red, white, or off. Aside from that, it’s a pretty good update.
However, these noise and aesthetic improvements are arguably moot once the card is buried inside your case; you won’t hear or see it unless you have a tempered glass side panel. This is particularly true given the 0dB fan technology, which keeps the fans off until the core exceeds 60 degrees.
The factory overclock adds extra megahertz to the core clock, resulting in up to a 20 percent boost to in-game frame rates. That may not sound like much, but when it comes to minimum frame rates, the higher they are, the better your overall experience will be.
- Very quiet
- Fans turn off in idle
- Beats the Radeon RX Vega 56
- High price
- Additional 6-pin power input barely needed
EVGA GeForce RTX 2060
Memory Clock: 1750 MHz
Boost Clock: 1680 MHz
Base Clock: 1365 MHz
Although the EVGA GeForce RTX 2060 is more expensive than the graphics card it replaces, it is also significantly more powerful. This mid-range GPU will fulfill your PC gaming fantasies and get you in the door with 1080p ray tracing.
In reality, compared to the RTX 2060 Super and the RTX 3000 line, the EVGA GeForce RTX 2060 is more affordable than ever before, making it a more appealing choice for budget-conscious consumers.
Key features and Performance
With each new generation of Nvidia graphics cards, there’s always one that outperforms its predecessors in terms of efficiency. It’s the EVGA GeForce RTX 2060 this time.
To begin with, the EVGA GeForce RTX 2060 has Full HD and QHD gaming mastered, while also letting you bask in the glory of Nvidia’s ray-traced future.
Despite its higher price, the EVGA GeForce RTX 2060 performs admirably and feels more like a replacement for the Nvidia GTX 1070 Ti.
You’re looking at a much more powerful graphics card, with 6GB of the new 14Gbps GDDR6 video memory and 50% more CUDA cores than the GTX 1060. The RTX 2060 has high-frame-rate Full HD gaming mastered, delivers excellent 1440p results, and manages playable 4K gaming hovering around 30 frames per second.
The Nvidia GeForce RTX 2070, AMD Radeon RX Vega 56, and Nvidia GeForce RTX 2080 are among the only graphics cards that outperform the Nvidia RTX 2060. You won’t find a more powerful or robust mid-range graphics card until AMD launches its Radeon RX 5700 and RX 5700 XT and Nvidia releases its Super RTX lineup.
The EVGA GeForce RTX 2060 is strong enough to hold frame rates in Full HD gaming well above 60fps, which is excellent news for high-refresh-rate monitor owners. Meanwhile, it offers respectable 4K gaming, with frame rates hovering about 30 fps in our tests.
Despite having the fewest RT and Tensor Cores in the RTX lineup so far, the EVGA GeForce RTX 2060 can still perform all of the Turing architecture’s latest tricks: DLSS and ray tracing. In fact, with Battlefield V running at 1080p with Ultra quality settings and ray tracing enabled, this GPU consistently delivers 70-75 frames per second.
Design and Cooling
The RTX 2060 is built on a cut-down version of the Turing TU106 GPU used in the Nvidia GeForce RTX 2070. Although you’re getting a scaled-down version of a higher-end graphics card, the bulk of its capability is still intact, which is precisely why this mid-range card is so strong. It also runs much cooler than previous generations.
The EVGA GeForce RTX 2060 is more capable and should stay relevant for longer. Furthermore, this graphics card offers a glimpse of Nvidia’s ray-traced future, though not to the same uncompromising extent as the company’s higher-end cards.
- Better performance than GeForce GTX 1070 Ti
- Stunning Founders Edition aesthetic design
- Great for 1080p and even 1440p
- Higher power consumption than previous-gen cards it replaces
To sum up: the graphics card is the most important component for gaming performance. If you want higher frame rates, the GPU should be the first upgrade that comes to mind, and as long as you’re playing Overwatch, any of these graphics cards will serve you well.
Top 3 Graphics Cards For Overwatch Comparison Video
Frequently Asked Questions
Is 70 fps good for overwatch?
60fps is the minimum acceptable, playable framerate for fast-paced games. For games more graphically intensive than Overwatch, anything running at over 70fps with sync enabled looks smooth to me.
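One way to see why the jump from 60 to 70fps feels subtle is to convert frame rates into frame times, since each extra frame per second buys progressively less time. A small illustrative sketch:

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds spent rendering each frame at a given frame rate."""
    return 1000.0 / fps

print(round(frame_time_ms(60), 2))   # 16.67 ms per frame
print(round(frame_time_ms(70), 2))   # 14.29 ms per frame
print(round(frame_time_ms(144), 2))  # 6.94 ms per frame
```

Going from 60 to 70fps saves under 2.4ms per frame, which is why the difference is hard to notice without a high-refresh-rate monitor.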
Is overwatch more CPU or GPU intensive?
Based on information gathered during beta sessions, the game is more demanding on CPUs than most games—especially for a first-person shooter.
Does overwatch have a benchmark?
Overwatch doesn’t ship with a built-in benchmark. The game isn’t particularly GPU intensive, but it does use some advanced shadow and reflection techniques that can impact FPS.