
A quarter of a century ago, a student connected 32 GeForce graphics cards to play Quake 3. That’s how CUDA came into being.


An idea from a Stanford student in 2000 led to the birth of CUDA—the technology on which the entire AI revolution is based today. It all started with Quake 3 in 8K.

It was a quiet Thursday evening at Stanford University in 2000. Student Ian Buck had set himself a seemingly impossible goal: playing Quake 3 in true 8K resolution – and not just playing it, but rendering it across eight projectors simultaneously.

His solution hovered somewhere between genius and madness: he wired 32 Nvidia GeForce graphics cards together into a render farm. What sounds like a mere anecdote about an ambitious nerd, however, planted the seed for one of the most valuable technologies of the 21st century. Buck’s idea paved the way for Nvidia’s CUDA technology – the platform that today forms the backbone of almost every major AI system.


From a gimmick to a scientific vision

For Buck, the Quake 3 experiment was a turning point. He recognized something that hardly anyone understood at the time: graphics processors could do far more than just draw triangles – they could become universal computing machines. With this insight, the computer science student dug deep into the technical specifications of Nvidia’s chips and turned the idea into his doctoral project (via Xataka).

  • The result: Together with a small group of researchers and supported by a DARPA grant, Buck developed an open-source programming language called “Brook.” This language made it possible to use graphics cards as general-purpose parallel computers.
  • Suddenly, parallel calculations could be performed on GPUs, for example by having one unit illuminate polygon A, another unit rasterize polygon B, and a third unit store the data – all at the same time (the sketch below shows the same idea in modern terms).
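
To make that idea concrete, here is a minimal sketch in modern CUDA C++ – a present-day illustration of the stream-computing model Brook pioneered, not Brook’s own syntax. The function and variable names are illustrative; the point is that one small function is mapped over an entire data set, with each GPU thread handling a single element:

```cuda
// Data-parallel kernel: thousands of GPU threads execute this function
// at once, each one processing a different element of the input arrays.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) c[i] = a[i] + b[i];                  // guard against overrun
}
```

Launched with enough threads to cover the array, those two lines of logic run in parallel across the whole GPU – the shift from merely drawing triangles to general-purpose computing.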

A paper entitled “Brook for GPUs: stream computing on graphics hardware” (available via Stanford University) followed – and caught the attention of one person in particular: Nvidia co-founder Jensen Huang. He immediately recognized the enormous potential and brought Buck straight to Nvidia.

The year 2005: Silicon Graphics collapses, worn down by competition from Nvidia. Today, little more than the OpenGL specification remains of the once-dominant US computer manufacturer.

Around 1,200 former SGI employees flocked to Nvidia’s research department. Among them was John Nickolls, a pioneer of parallel processing whose previous venture had failed, but who now launched a new project alongside Buck.

  • This project was given a name that initially caused more confusion than clarity: “Compute Unified Device Architecture,” or CUDA for short.
  • In November 2006, Nvidia released the first version of this free software – but exclusively for its own hardware.

The initial euphoria quickly faded. In 2007, CUDA was downloaded just 13,000 times. The millions of Nvidia customers wanted to use their graphics cards for gaming and nothing else, CUDA programming proved to be complex, and the investment hardly seemed worthwhile. Internally, too, the project consumed considerable resources without producing any visible results.
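
To illustrate that complexity (a sketch of the typical pattern, not code from Nvidia’s SDK): before conveniences such as unified memory, even running a trivial kernel meant managing two separate memory spaces by hand – allocate on the device, copy in, launch, copy back, free.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// The same trivial element-wise addition as in the earlier sketch.
__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Buffers in ordinary CPU (host) memory.
    float *ha = new float[n], *hb = new float[n], *hc = new float[n];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // The ceremony that made early CUDA feel heavyweight:
    // separate GPU (device) allocations and explicit copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);   // one thread per element
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[0] = %f\n", hc[0]);  // expect 3.000000

    cudaFree(da); cudaFree(db); cudaFree(dc);
    delete[] ha; delete[] hb; delete[] hc;
    return 0;
}
```

For a gamer who only wanted higher frame rates, this was a lot of ceremony with no obvious payoff – which goes some way toward explaining those 13,000 downloads.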

The long road to the AI revolution

In its early years, CUDA was anything but an AI technology – artificial intelligence was barely a topic of discussion at the time. Instead, it was research laboratories and scientific institutes that put CUDA to use.

However, in a 2009 interview with Tom’s Hardware, Buck made it clear that he already had an idea of where the journey might lead:

We will see opportunities in personal media, such as image and photo classification based on content – faces, places – operations that require enormous computing power.

Whether Buck’s prediction would prove spot on was anyone’s guess at the time. At least he didn’t have to wait long to find out.

  • In 2012, two doctoral students named Alex Krizhevsky and Ilya Sutskever, under the guidance of Geoffrey Hinton, presented a project called “AlexNet.”
  • This software could automatically classify images based on their content – an idea that had previously been considered computationally infeasible.

The key point: they trained this neural network on Nvidia graphics cards using CUDA.

At this point, at the latest, the two worlds merged: CUDA and artificial intelligence suddenly made sense together. The rest is history: a Stanford student’s absurd idea became the technology on which millions of AI systems run today – and made Nvidia the most valuable tech company in the world.

