Building a Beast: A Guide to Creating a Home Computer for AI Training and Inference with Dual NVIDIA RTX 3090 and NVLink
As artificial intelligence (AI) continues to revolutionize industries and transform the way we live, many enthusiasts and professionals are looking to build their own computers capable of handling demanding AI workloads. In this blog post, we’ll walk you through the process of building a powerful computer for AI training and inference at home, featuring not one but two NVIDIA RTX 3090 graphics cards connected with NVLink, for a whopping 48 GB of combined video random access memory (VRAM).
Why Build a Custom Computer for AI?
Before we dive into the build process, let’s quickly discuss why building a custom computer is essential for AI applications. Pre-built computers often lack the necessary hardware and configuration to handle the intense computational demands of AI training and inference. By building a custom computer, you can:
- Choose the best components for your specific use case
- Ensure optimal performance and efficiency
- Future-proof your system with upgradable parts
- Save money by avoiding unnecessary features and costs
Taking Control of Your AI Models: How Running Locally Can Improve Privacy and Reduce Costs
One of the most significant benefits of running your models locally is improved privacy and security. When you use a third-party service, you are constantly handing over information, whether code or data, that the provider could later use.
Cloud-based AI services, such as Google Colab, Amazon SageMaker, or Microsoft Azure Machine Learning, offer convenient and scalable solutions for training and deploying machine learning models. However, when you use these services, you’re essentially handing over your data and models to a third-party provider. This can be a concern for several reasons:
- Data breaches: Cloud providers, like any other company, are not immune to data breaches. If their servers are compromised, your sensitive data could be exposed.
- Model theft: Your machine learning models, which may contain valuable intellectual property, could be stolen or reverse-engineered by malicious actors.
- Surveillance: Some cloud providers might collect telemetry data on your model usage, which could be used for targeted advertising or other purposes.
The Cost Savings of Running Locally
In addition to the privacy benefits, running your models locally can also help you avoid costly provider fees. Cloud-based AI services often charge by the hour, and these costs can add up quickly, especially if you’re training large models or performing extensive hyperparameter tuning.
By running your models locally, you can:
- Avoid hourly charges: Once you’ve built your custom computer, you can run your models without incurring additional hourly fees.
- Reduce data transfer costs: You won’t need to upload your data to the cloud, which can save you money on data transfer costs.
- Minimize dependencies: By running locally, you’re less dependent on third-party providers and can maintain more control over your workflow.
The Educational Benefits of Running Locally
Running your models locally also provides an excellent opportunity for learning and growth. When you’re forced to manage the intricacies of machine learning deployment yourself, you’ll gain a deeper understanding of the underlying technologies and concepts.
- Hands-on experience: By working with your local setup, you’ll develop practical skills in areas like model optimization, hyperparameter tuning, and debugging.
- Customization and experimentation: Running locally allows you to experiment with different architectures, frameworks, and techniques, which can help you develop a more nuanced understanding of machine learning.
- Community engagement: As you work through challenges and share your experiences online, you’ll become an active participant in the machine learning community, learning from others and contributing your own knowledge.
Getting Started with Local AI Development
To begin running your models locally, follow these steps:
- Choose a framework: Select a popular machine learning framework like TensorFlow, PyTorch, or Keras, and install it on your local computer.
- Prepare your data: Collect and preprocess your dataset, ensuring it’s in a suitable format for your chosen framework.
- Develop and train your model: Create and train your machine learning model using your local computer’s resources.
- Deploy and test: Deploy your trained model and test its performance on your local setup (a minimal end-to-end sketch follows this list).
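If you want to sanity-check the whole pipeline end to end, here’s a minimal sketch, assuming PyTorch with CUDA support is installed; the tiny model and the random data are placeholders, not a real workload:

```python
# Minimal train-and-test loop to verify a local GPU setup.
# Assumes PyTorch with CUDA support (pip install torch).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}, GPUs visible: {torch.cuda.device_count()}")

# Toy dataset: 1,000 samples, 128 features, 10 classes.
X = torch.randn(1000, 128)
y = torch.randint(0, 10, (1000,))

model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Develop and train": a few full-batch epochs on the toy data.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X.to(device)), y.to(device))
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")

# "Deploy and test": run inference on a fresh sample.
model.eval()
with torch.no_grad():
    pred = model(torch.randn(1, 128).to(device)).argmax(dim=1)
print(f"predicted class: {pred.item()}")
```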
Components Needed:
To build our AI powerhouse, we’ll need the following components:
- CPU: AMD Ryzen 9 7950X 16-Core, 32-Thread Unlocked Desktop Processor
- Motherboard: MSI MAG B650 Tomahawk WiFi Gaming Motherboard
- Memory: 64 GB (or more) of DDR5 RAM, 4800 MT/s or higher (the Ryzen 9 7950X and B650 platform support only DDR5, not DDR4)
- Storage: Fast NVMe SSD (e.g., Samsung 990 EVO) for the operating system and AI frameworks
- Power Supply: 1000 W or higher, 80+ Gold certified or better (e.g., EVGA SuperNOVA 1600 T2, an 80+ Titanium unit)
- Graphics Cards: 2 x NVIDIA RTX 3090 (48 GB GDDR6X VRAM total)
- NVLink Bridge: To connect the two RTX 3090 graphics cards (make sure the bridge spacing matches your motherboard’s slot layout; a quick verification sketch follows this list)
- Case: A spacious, well-ventilated case with good airflow and cable management options (e.g., Fractal Design Meshify C)
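Once everything is assembled, it’s worth confirming that both cards are visible and can reach each other over the bridge. Below is a minimal check, assuming PyTorch with CUDA support is installed; running `nvidia-smi topo -m` in a shell is another way to inspect the link topology.

```python
# Verify both RTX 3090s are visible and peer-to-peer access works.
# Assumes PyTorch with CUDA support is installed.
import torch

count = torch.cuda.device_count()
print(f"GPUs detected: {count}")
for i in range(count):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")

# Peer access between the two cards should report True with a working
# NVLink bridge (it can also be True over PCIe on some platforms).
if count >= 2:
    print("Peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))
    print("Peer access 1 -> 0:", torch.cuda.can_device_access_peer(1, 0))
```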
Okay, okay, can we see a picture?
Geppetto, the AI machine.

Power Consumption:

Ok, so UPS02 feeds only Geppetto (the AI machine), while UPS01 feeds Storage (my NAS, which has a 4070 for AI image generation, Jellyfin, etc.).
Examples:
Self-hosted ChatGPT

What models do I run?
For real work:
- llama3.3:70b-instruct-q3_K_M
- deepseek-r1:70b
For simple tasks:
- deepseek-r1:32b
- llama3.2:3b-instruct-q8_0
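All of these run under Ollama. As a quick illustration, here’s a minimal sketch that queries one of the small models above through Ollama’s local HTTP API (default port 11434); the prompt is just an example:

```python
# Ask a local model a question via Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and the
# model tag below has already been pulled with `ollama pull`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b-instruct-q8_0",
        "prompt": "Summarize NVLink in one sentence.",
        "stream": False,  # return one JSON object instead of a stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```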
Have you tried 405b models?
I have. Sadly, the performance was horrible: the model spilled into the 200 GB of system RAM and was unusably slow. But hey, it worked, I guess.
Future plans?
I’ve been integrating Ollama (self-hosted AI) into a million things in my daily workflow, from recording, transcribing, and creating meeting minutes for any meeting I join in Zoom, Microsoft Teams, or Mattermost, to summarizing my emails for me.
This is the first version; I’m currently working in a private repo, but the idea is to make it public eventually.
https://git.takelan.com/ddemuro/ai-email-assistant
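As a flavor of the idea, here is a hypothetical sketch (illustrative only, not code from the repo above) that asks a local model to summarize an email body via Ollama’s chat endpoint:

```python
# Hypothetical illustration of the email-summary idea (not code from
# the ai-email-assistant repo). Assumes a local Ollama instance.
import requests

def summarize_email(body: str, model: str = "llama3.3:70b-instruct-q3_K_M") -> str:
    """Ask a local model for a short summary of an email body."""
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system",
                 "content": "Summarize the user's email in two sentences."},
                {"role": "user", "content": body},
            ],
            "stream": False,
        },
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(summarize_email("Hi team, the Q3 report is due Friday..."))
```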
I can only say AI has made my life easier, and I can justify the power cost of running this at home if it means having an assistant to handle some mindless tasks for me.