Elon Musk's artificial intelligence company xAI is pivoting its core business toward building large-scale data-center infrastructure and producing custom AI chips [1, 2].

This shift represents an attempt to vertically integrate the AI stack. By controlling both the hardware and the cloud environment—a model described as a "neocloud"—xAI aims to capture more of the industry's economics and reduce the pricing power currently held by Nvidia [1].

Central to this strategy is the Terafab, a facility where xAI will manufacture its own chips [1]. While this in-house production is intended to limit dependence on outside vendors, the company continues to seek massive external funding for its existing hardware needs: xAI recently raised $10 billion and is seeking up to $12 billion in additional financing to purchase Nvidia GPUs [2].

The company's infrastructure ambitions extend beyond Earth. Plans are in place to deploy space-based data centers by 2035 [1]. This expansion suggests a long-term strategy to move computing power outside of traditional terrestrial constraints.

These infrastructure developments come as xAI continues to scale its software offerings. Recent data indicates that Grok 4's per-token pricing remains higher than Gemini 2.5 Pro's [3]. The transition to a neocloud model may be a move to lower those operational costs over time.

TechCrunch reported that Musk's version of a neocloud is more ambitious than typical industry implementations [1]. For now, the company is balancing its immediate need for third-party chips against the long-term goal of full hardware independence [1, 2].
In short, xAI is attempting to evolve from a software-first AI lab into a full-stack infrastructure provider. By combining custom chip fabrication at the Terafab with an expansive data-center footprint—including the speculative leap to space-based computing—Musk is positioning xAI to bypass the "GPU tax" imposed by Nvidia. If successful, this vertical integration could significantly lower the cost of training future models, though the immediate reliance on billions of dollars in financing for Nvidia chips shows that total independence is still years away.