ZOTAC GAMING GeForce GTX 1650

CUDA cores 896
Base clock 1,485MHz
Boost clock 1,665MHz
Memory 4GB of GDDR5

With the rest of the Turing lineup covering the mid-range and high end, a budget entry was a foregone conclusion, and the GeForce GTX 1650 fills that role. It is almost certainly the final implementation of the Turing architecture (at least on 12nm). Nvidia now has ray tracing enthusiasts covered, from the powerful GeForce RTX 2080 Ti down to the more affordable GeForce RTX 2060, while the GTX 1660 Ti and GTX 1660 drop the RT and Tensor cores in favor of lower cost.

Like all other Turing GPUs, the GTX 1650 can perform concurrent FP32 and INT calculations, which speeds up gaming workloads by 15-35 percent (depending on the game) compared with the prior Pascal architecture.

Several improvements add up to better results. First, compared with the 1050/1050 Ti, the 1650 has more memory bandwidth and more CUDA cores. Second, it runs at much higher clock speeds. Third, unlike Pascal GPUs, the Turing architecture supports concurrent FP32 and INT calculations, adding another 10 to 30 percent of performance (depending on the game and settings).

The GTX 1650 is powered by the new TU117 GPU, a smaller and less costly version of the TU116 used in the GTX 1660 and 1660 Ti cards. The main differences from the 1660 line are the memory configuration and the number of SMs (Streaming Multiprocessors), which determines the number of CUDA cores, texture units, and ROPs.

The GTX 1650 has 4GB of GDDR5 memory clocked at 8GT/s, much like the GTX 1660 and the previous-generation GTX 1060 cards. With four active memory controllers on a 128-bit bus, it delivers 128GB/s of bandwidth, marginally more than the GTX 1050 Ti. It also includes 32 ROPs (render outputs).
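
As a quick sanity check, that bandwidth figure is just the effective data rate multiplied by the bus width in bytes; the short Python sketch below uses the numbers quoted above.

    # GTX 1650 memory bandwidth: effective data rate x bus width in bytes.
    # Figures taken from the specs quoted above.
    data_rate_gtps = 8            # GDDR5 effective transfer rate, GT/s
    bus_width_bits = 128          # four 32-bit memory controllers
    bandwidth_gbs = data_rate_gtps * bus_width_bits / 8
    print(f"{bandwidth_gbs:.0f} GB/s")   # -> 128 GB/s, matching the figure above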

As implemented in the GTX 1650, TU117 has 14 SMs enabled, translating to 896 CUDA cores and 56 texture units.

The stock GTX 1650 has a boost clock of 1665MHz, which gives it a theoretical peak of 2,984 GFLOPS. Factory overclocked cards (like the MSI GTX 1650 Gaming X 4G that I'm using) have higher clock speeds and include a 6-pin PEG connector, but the GTX 1650 is designed to run without one.
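
That GFLOPS figure follows directly from the core count and boost clock: each CUDA core can retire a fused multiply-add (two floating-point operations) per clock, so the quick sketch below reproduces it.

    # Peak FP32 throughput: cores x boost clock x 2 FLOPs per core per clock (FMA).
    cuda_cores = 896
    boost_clock_mhz = 1665
    gflops = cuda_cores * boost_clock_mhz * 2 / 1000   # MHz x 2 ops -> GFLOPS
    print(f"{gflops:.0f} GFLOPS")                      # -> 2984 GFLOPS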

Another interesting point: the GTX 970 is only marginally faster (1-3 percent on average) than the GTX 1650, thanks to the architectural changes of the past two generations, and the GTX 1650 is the better bet in newer games.

Overall, the GTX 1650 performs almost as well as one would hope. It outperformed the GTX 1050 by 57 percent at 1080p medium and 73 percent at 1080p ultra in my testing. The larger gap at ultra is mainly attributable to the 1050's small VRAM; the GTX 1050 Ti is much closer, with the 1650 about 30 percent faster.

Pros
  • Very affordable
  • Great performance per watt
  • Over 50 percent faster than GTX 1050
Cons
  • Expensive relative to Radeon RX 570
  • Some versions do require auxiliary power
  • Struggles in more demanding games

EVGA GeForce GTX 980

Base Clock 1266 MHz
Boost Clock 1367 MHz
CUDA Cores 2048
Memory Detail 4096MB GDDR5

Based on its performance and (comparatively) low power requirements, the GeForce GTX 980 is easy to recommend to gamers building a new rig for high-end gaming or upgrading an older card to achieve steady frame rates at resolutions above 1080p.

Key Features and Performance

The GeForce GTX 980 accelerates VXGI (Voxel Global Illumination) to the point that game developers can compute global illumination in-game, allowing bounced light to interact with dynamic objects (such as characters) in real time rather than being baked only onto static objects like floors, walls, and furniture.

Compared to the GeForce GTX 680, it doubles the number of render output units (ROPs). This, coupled with a higher clock speed (approximately 1,126MHz base and 1,216MHz boost on the stock GTX 980), is intended to improve performance at high resolutions and when high anti-aliasing settings are applied.
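
To put the doubled ROP count in perspective, peak pixel fill rate scales with ROPs times clock speed. The sketch below assumes the reference GTX 980's 64 ROPs (double the GTX 680's 32) and the base clock quoted above.

    # Rough peak pixel fill rate: ROPs x clock. Assumes the reference GTX 980's
    # 64 ROPs (double the GTX 680's 32) and the 1,126MHz base clock quoted above.
    rops = 64
    base_clock_mhz = 1126
    fill_rate_gpixels = rops * base_clock_mhz / 1000
    print(f"{fill_rate_gpixels:.1f} Gpixels/s")   # ~72.1, vs ~32.2 for a GTX 680 at 1006MHz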

The competing AMD card is still very fast, but it can be expected to draw between 270 and 300 watts under load, judging by its power-connector specifications.

A new anti-aliasing method, a new dynamic lighting effect, and Dynamic Super Resolution (DSR) are among the features supported by the GeForce GTX 980. DSR attempts to provide better image quality on lower-resolution displays by rendering at a higher resolution and downsampling the result.
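
To illustrate how those "virtual resolutions" relate to the panel's native resolution, DSR factors multiply the total pixel count, so each axis scales by the square root of the factor. The sketch below is just an example, assuming a 1080p display and the 4.00x factor.

    import math

    # DSR factors multiply the total pixel count; each axis scales by sqrt(factor).
    # Example values only: a 1080p panel with the 4.00x factor.
    native_w, native_h = 1920, 1080
    dsr_factor = 4.00
    render_w = round(native_w * math.sqrt(dsr_factor))
    render_h = round(native_h * math.sqrt(dsr_factor))
    print(f"Render at {render_w}x{render_h}, downsample to {native_w}x{native_h}")
    # -> Render at 3840x2160, downsample to 1920x1080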

The GeForce GTX 980 features 4GB of GDDR5 RAM running at an effective 7Gbps, along with a new memory-compression engine that Nvidia claims cuts memory bandwidth demands.

Nvidia touts a few additional in-game features with the GeForce GTX 980, on top of a new architecture that stresses efficiency and, of course, support for the upcoming DirectX 12 API.

We briefly tried Dynamic Super Resolution with a couple of our test games on a Dell 1080p monitor during our testing. Enabling the option in the Nvidia Control Panel results in higher "virtual resolutions" appearing in the games' configuration dialogues.

Design and Cooling

The GTX 980 reference card has three DisplayPort outputs, which addresses one of the complaints about previous-generation Nvidia high-end cards.

You can attach three DisplayPort monitors to a single GeForce GTX 980 card. The HDMI port is also the brand-new 2.0 type, allowing it to drive 4K resolution at 60Hz.

It runs cooler and quieter than comparable AMD cards while requiring far less power.

The removable section of the backplate is designed to improve airflow and cooling when several GeForce GTX 980 cards sit next to each other. According to Nvidia, the extra airflow gained by removing this portion of the backplate is critical for feeding air to the adjacent card's intake fan.

Final Verdict

Nvidia's high-end Maxwell-based card delivers the best single-GPU gaming performance to date while using far less power than the competition, and its price is comparable to AMD's Radeon R9 290X.

Pros
  • Low Power Consumption
  • Large 14-phase PWM
  • Extensive Voltage Control
Cons
  • Test leads not included

Asus GeForce GTX 1050 Ti

CUDA Cores 768
Graphics Clock 1290 MHz
Processor Clock 1392 MHz
Graphics Performance High (6747)

Nvidia expands its 10-series GPU range with six new models with the introduction of the GTX 1050. If you can live with the 2GB VRAM restriction, the GTX 1050 promises to be the best overall budget GPU and the best ultra-low-cost option. It's adequate for entry-level gaming and light esports, and it should fit in almost every machine sold in the last five years or more.

Both cards use the same GP107 GPU, built on Samsung's 14nm FinFET node. The 1050 comes with 2GB of GDDR5 memory, which can be limiting depending on the games and settings you choose.

But there's more to it than that. With just 640 CUDA cores, this is the most basic Pascal chip available; the lower core count is compensated for by a high boost clock of 1,455MHz.
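
Those two numbers set the card's theoretical ceiling. As with the other cards here, peak FP32 throughput is simply cores times clock times two operations per clock; a quick sketch with the figures just quoted:

    # Peak FP32 throughput for the GTX 1050: cores x boost clock x 2 (FMA = 2 FLOPs).
    cuda_cores = 640
    boost_clock_mhz = 1455
    tflops = cuda_cores * boost_clock_mhz * 2 / 1_000_000
    print(f"{tflops:.2f} TFLOPS")   # -> ~1.86 TFLOPS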

Pascal's efficiency pays off in the results: the GTX 1050 comfortably outperforms the GTX 950, and for many buyers that is exactly the level of performance they need.

All you need is a full-size x16 PCIe slot to install a 1050 and get a significant performance boost over integrated graphics. It's also a decent HTPC solution, as the card retains all of the HEVC decoding capabilities of the more costly Pascal offerings.

It provides superior raw performance to the AMD graphics hardware used in the base PlayStation 4 and Xbox One. However, because of the card's limited local video memory, most gamers would be better served by a higher-end card with 3GB or more of VRAM.

A tiny heatsink and a single fan keep it cool, and it's housed in EVGA's fashionable but straightforward black and grey plastic shroud. The real kicker is the power use: the GTX 1050's TDP (thermal design power) is only 75W, which means it gets all of its power from your motherboard. During the Hitman benchmark, our high-end, overclocked test PC drew just 150W in total, 61W less than with the RX 460.

The GTX 1050 is the best demonstration yet of Nvidia's Pascal architecture in action. It may not achieve stellar benchmark numbers, but as a replacement for an outdated or failed GPU it's excellent, and it shouldn't necessitate upgrading the power supply. It is the best low-cost GPU for esports players.

Pros
  • Extremely efficient
  • Excellent price
  • HDMI 2.0b, DisplayPort 1.4
  • Handles all games at Full HD
Cons
  • No SLI support
  • Some games need to be dropped to Medium settings

EVGA GeForce GTX 750Ti

CUDA Cores 640
Base Clock 1020 MHz
Boost Clock 1085 MHz
Memory Clock 5.4 Gbps

The 750 Ti is the green team's newest addition to its lineup, giving gamers a low-cost 1080p gaming video card. The 750 Ti outperformed the AMD cards in every benchmark, making it the clear winner in raw results. While the 260X is a rebadged Radeon HD 7790, the 750 Ti is based on Nvidia's latest Maxwell GPU architecture.

Key Features and Performance

Nvidia's 750 Ti is 22 percent better than its 600-series predecessor when both metrics are combined. The 750 Ti also supports G-Sync, a new Nvidia technology that lets GeForce video cards play games without the stuttering or screen tearing caused by a mismatch between frame rate and the monitor's refresh rate.

Even more remarkable is that the 750 Ti doesn't need any extra power connectors. The 750 Ti has one Mini-HDMI port and two dual-link DVI ports, which can drive three screens. It's a low-power video card that uses only 60 watts.

With its 115W TDP, AMD's R7 260X consumes nearly twice as much power. The 750 Ti's power supply requirement is also minimal, at just 300W, so you won't need a big 500W or 750W unit to run it. Because of this, and because of its compact size, it's an easy upgrade for a wide range of systems.

The card has 640 CUDA cores, a 1020MHz base clock, and a 1085MHz boost clock. The 750 Ti also has 2GB of GDDR5 video RAM running at an effective 5400MHz. If you're going to drive multiple monitors, this 2GB Ti version is the way to go, since you'll have more video RAM to work with.
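
Plugging those numbers into the same back-of-the-envelope math used earlier (and assuming the 750 Ti's 128-bit memory bus, which isn't listed in the specs above) gives roughly 1.4 TFLOPS of compute and 86 GB/s of memory bandwidth:

    # Same back-of-the-envelope math as above; the 128-bit bus is an assumption
    # not listed in the spec block.
    cuda_cores = 640
    boost_clock_mhz = 1085
    memory_gbps = 5.4          # effective GDDR5 data rate from the spec list
    bus_width_bits = 128

    tflops = cuda_cores * boost_clock_mhz * 2 / 1_000_000
    bandwidth_gbs = memory_gbps * bus_width_bits / 8
    print(f"~{tflops:.2f} TFLOPS, ~{bandwidth_gbs:.1f} GB/s")   # -> ~1.39 TFLOPS, ~86.4 GB/s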

Using EVGA's Precision X overclocking tool, I overclocked the 750 Ti by raising its boost clock to 1169MHz. There were no noticeable stability problems, and overall performance increased by around 5 percent in all titles except Batman: Arkham Origins, which only saw a 3 percent improvement.
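
For reference, that overclock amounts to roughly an 8 percent bump in boost clock, so a gain of around 5 percent in frame rates is about the scaling you'd expect:

    # Clock increase from the overclock described above.
    stock_boost_mhz = 1085
    oc_boost_mhz = 1169
    increase_pct = (oc_boost_mhz - stock_boost_mhz) / stock_boost_mhz * 100
    print(f"{increase_pct:.1f}% higher boost clock")   # -> ~7.7%, yielding ~5% more performance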

Design and Cooling

The card is just 5.7 inches (14.5cm) long and occupies a single slot. Noise and cooling were not an issue, and the card wasn't noticeably louder overclocked than at stock clocks. With the Steam Box revolution just around the corner, that's a wise design decision.

Final verdict

With the 750 Ti, Nvidia has taken the fight to AMD: the card uses less power and outpaces both the R7 260X and the HD 7790 in our comparisons by a margin of around 11 percent.

Pros
  • Excellent power consumption
  • Cool and very quiet
Cons
  • No SLI support

Sapphire Radeon NITRO Rx 460

TDP 75 W
Memory Clock 1750 MHz
Memory Speed 7 Gbps effective
Boost Frequency Up to 1200 MHz

It's built on AMD's fourth-generation GCN architecture, rebalanced for more power-conscious applications, and with up to 2.2 teraflops of compute performance it can compete in this segment.

Key Features and Performance

The reference RX 460 runs at a base clock of 1090 MHz and a boost clock of 1200 MHz. Our U.S. lab received Sapphire's Nitro Radeon RX 460 OC, which is factory-overclocked to a 1175 MHz base and 1250 MHz boost. Next to the GPU sit four 1GB memory ICs, totaling 4GB of GDDR5. They aren't listed in Micron's parts catalog, so all we know is that they have an 8Gb density and run at 1750 MHz.
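
Those memory figures line up with the 7 Gbps effective rate in the spec list: GDDR5 transfers four bits per pin per clock, and assuming the usual 32-bit channel per IC, the four chips form a 128-bit bus, putting peak bandwidth at about 112 GB/s. A quick sketch:

    # GDDR5 is quad-pumped, so 1750 MHz -> 7 Gbps effective per pin.
    # Assumes the usual 32-bit channel per memory IC, i.e. a 128-bit bus.
    memory_clock_mhz = 1750
    effective_gbps = memory_clock_mhz * 4 / 1000     # -> 7.0 Gbps
    bus_width_bits = 4 * 32
    bandwidth_gbs = effective_gbps * bus_width_bits / 8
    print(f"{effective_gbps:.0f} Gbps effective, {bandwidth_gbs:.0f} GB/s peak")   # -> 7 Gbps, 112 GB/s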

Meanwhile, the GPU's power phases are identical to those seen on the Asus Strix RX 470: one M3054 N-channel MOSFET handles the high side, while two M3056 N-channel MOSFETs handle the low side. Both parts come from UBIQ.

Current is monitored and regulated by an ITE 8915FN on the Strix RX 460. Even though the board could have been a lot shorter, its layout is simple, orderly, and quite well thought out. The materials are what you'd expect at this price point, but there aren't any flaws to be found.

Design and Cooling

Even though Polaris 11 shares some characteristics with its predecessors, the switch to 14nm FinFET inherently means we're looking at a new GPU. Compared to Polaris 10, which has 5.7 billion transistors on a 232 mm2 die, the Radeon RX 460 processor has three billion transistors on a 123 mm2 die. 

There's also a six-pin power connector that has been rotated 180 degrees. The four-pin header for PWM fans controlled by GPU temperature also makes a return.

On its rear bracket, Asus exposes three display outputs, dropping one of the dual-link DVI ports used on the Strix RX 470. That leaves one dual-link DVI-D output, HDMI 2.0b, and a DisplayPort 1.3 (HBR3/1.4 HDR-ready) connector on the Strix RX 460, which is enough for 4K at 120 Hz or 5K at 60 Hz.

The cooler shroud, which carries two 75 mm fans, is fixed on with four screws. The PWM-controlled fans have 3W motors, but our measurements indicate they don't exceed 4W combined during stress tests. The tachometer signal is read from the first fan.

Final Verdict

The RX 460 earns its place in this list as an affordable, power-friendly card that is fast enough for 1080p gaming in many titles, even if its performance is only on par with the year-old GTX 950.

Pros
  • Fast enough for 1080p in many games
  • Affordable
  • Power Friendly
Cons
  • Performance has limits, even at 1080p
  • On par with year-old GTX 950 in terms of performance

Gigabyte GeForce GT 1030

GPU Variant GP108-300-A1
TDP 30 W
GPU Name GP108
Memory Clock 1502 MHz

The GeForce GT 1030 from Nvidia performs admirably in the competitive games for which it was made. Furthermore, the card's low price, power-efficient architecture, and appealing form factor make it accessible to almost anybody with a PCIe slot. To represent Nvidia's new entry, Gigabyte sent over its GeForce GT 1030 Low Profile 2G.

Key Features and Performance

The GeForce GT 1030 is powered by the GP108, a brand-new graphics processor with 1.8 billion transistors. It's a tiny chip, measuring only 70mm2, and it's made using the same 14nm FinFET process as the GP107. In comparison, the GK208 chip in the GeForce GT 730 has 1.02 billion transistors in an 84mm2 die.

In the Pascal and Maxwell architectures, each SM/SMM has 128 CUDA cores, and with three SMs enabled that gives the GT 1030 384 cores. The GeForce GT 1030 also exposes eight texture units per SM, for 24 texture units in total. There are two ROP partitions on the GPU, allowing up to 16 32-bit pixels per clock.

Those partitions are paired with 256KB L2 cache slices on the GP108, versus 1MB slices on the GM107, which means the GeForce GT 1030 tops out at 512KB of L2. According to the GT 1030's specifications, the memory bus is split into two 32-bit controllers, for a 64-bit interface in total.

Nvidia goes a long way toward making up for those shortcomings with higher clock speeds: our sample has a base frequency of 1227 MHz and a GPU Boost rating of 1468 MHz. The GP108 graphics processor, built on a 14nm process in its GP108-300-A1 variant, supports DirectX 12, so modern games will run on the GeForce GT 1030.

It uses 384 shading units, 24 texture mapping units, and 16 ROPs. Nvidia pairs the GeForce GT 1030 with 2,048 MB of GDDR5 memory on a 64-bit memory interface.
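
Given the 1502 MHz memory clock from the spec list and that narrow 64-bit interface, peak memory bandwidth works out to a modest 48 GB/s or so; a quick sketch:

    # GT 1030 memory bandwidth from the spec-list memory clock and the 64-bit bus.
    memory_clock_mhz = 1502
    effective_mtps = memory_clock_mhz * 4    # GDDR5 quad-pumped -> ~6008 MT/s
    bus_width_bits = 64
    bandwidth_gbs = effective_mtps * bus_width_bits / 8 / 1000
    print(f"~{bandwidth_gbs:.0f} GB/s")      # -> ~48 GB/s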

Design and Cooling

The card ships with a full-height slot bracket, but a half-height bracket for slim enclosures is also included. While ours is an actively cooled model, Gigabyte also offers a passively cooled, low-profile version with the same clock speeds.

The GeForce GT 1030 connects to the rest of the system over a PCI-Express 3.0 x4 interface. The card measures 145 mm by 69 mm by 18 mm and uses a single-slot cooling solution.

Final Verdict

This is a great chip: it performs nearly on par with the 750 Ti while offering roughly double the performance-per-watt. The passively cooled version is also perfect for a silent PC.

Pros
  • Starts at $70
  • Remarkably low 30W TDP
Cons
  • Vulkan-based games