Is AI in its Edison vs. Tesla Moment?

Charlie Greenman


In the race toward Artificial General Intelligence (AGI), there’s an uncanny echo of a historical rivalry — Edison vs. Tesla. Today, tech companies are scaling up their GPU fleets, hoping sheer computational power will be the key to unlocking AGI. But is this the best path forward? My intuition tells me otherwise. Scaling up GPUs may not be the silver bullet. In fact, we might reach AGI with smarter, more efficient architectures, without needing endless GPU arrays. Sounds crazy? Let me explain.

Inexhaustible AI? Maybe what could have been

GPU Scaling vs. Better Architecture

The prevailing approach today, from OpenAI to DeepMind, is to throw more GPUs at the problem. The reasoning seems sound: more computational power should logically produce better results. But this is where the story takes a turn. Did you know that researchers have recently shown that Chain-of-Thought (CoT) reasoning, when distilled back into a much smaller model like GPT-2, can produce GPT-4-like results on certain tasks? This suggests that we may not need to scale models up indefinitely; instead, we can refine how they process information.
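To make that concrete, here is a minimal sketch of what such an approach could look like: fine-tuning GPT-2 on chain-of-thought traces written by a stronger teacher model, using Hugging Face’s transformers and datasets libraries. The file name cot_traces.jsonl, the trace format, and the hyperparameters are illustrative assumptions on my part, not details from any particular paper.

```python
# A minimal sketch of chain-of-thought (CoT) distillation into a small model.
# Assumption: cot_traces.jsonl holds reasoning traces generated by a larger
# teacher model, one JSON object per line, e.g.
#   {"text": "Q: ...\nLet's think step by step: ...\nA: ..."}
from datasets import load_dataset
from transformers import (
    GPT2LMHeadModel,
    GPT2TokenizerFast,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = load_dataset("json", data_files="cot_traces.jsonl", split="train")

def tokenize(batch):
    # Truncate/pad each trace to a fixed length; for causal LM fine-tuning,
    # the labels are simply a copy of the input ids.
    out = tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=512
    )
    out["labels"] = [ids.copy() for ids in out["input_ids"]]
    return out

dataset = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="gpt2-cot-distilled",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=5e-5,
    ),
    train_dataset=dataset,
)
trainer.train()  # the small student learns to imitate the teacher's reasoning style
```

The point isn’t that this exact recipe reaches GPT-4 quality; it’s that the lever being pulled is training data and procedure, not GPU count.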

The truth is, the industry is betting on the safe option: more GPUs. Why? From a business perspective, it’s a no-brainer. The first company to achieve AGI will capture the user base and control the future of AI. It’s a classic business strategy: whoever scales up first wins. But scaling GPUs isn’t necessarily the best technological choice.

The Edison vs. Tesla Parallel

Let’s take a step back. Edison’s approach to electrical power was brute force: direct current (DC) systems that couldn’t carry power efficiently over long distances and therefore demanded enormous infrastructure, with generating stations close to every customer. Tesla, on the other hand, championed alternating current (AC), a more efficient, scalable system whose voltage could be stepped up for long-distance transmission, and which ultimately proved to be the better solution for powering the modern world. But at the time, Edison’s DC had the business backing and the momentum.

I see a similar narrative playing out today. The brute-force GPU scaling strategy is Edison’s DC: more expensive, resource-intensive, but business-friendly. The alternative — better architecture requiring fewer GPUs — feels like Tesla’s AC. It’s more elegant, efficient, and would make AGI accessible to the broader population at lower costs.

The Cost of Following Edison’s Path

The current trajectory is leading us toward massive energy consumption and expensive infrastructure to maintain these GPU farms. If we continue down this path, we risk making AGI a luxury product — something only the wealthiest nations and companies can afford to develop and deploy. But what if we could have smarter AI without the need for supercomputers in every lab?

The potential is there. Researchers are already showing that even older models like GPT-2, given the right techniques, can approach the performance of models that cost millions of dollars to train, at least on specific tasks. This is a reminder that innovation doesn’t always come from brute force. Sometimes, the smarter choice is the one that challenges the status quo.

Conclusion: Will We Choose Tesla?

Historians may look back at this moment as the AI equivalent of the Edison vs. Tesla battle. Will we opt for the brute force, business-friendly path, or will we eventually pivot toward a more refined, efficient approach to AGI? The choice may seem like a technical one, but the implications are far-reaching. Cheaper, more accessible AI benefits everyone — not just the companies who win the race.

We’re at a crossroads, and as history shows, the most obvious choice isn’t always the best one in the long run.
