If you are paying monthly subscription fees for AI services like ChatGPT and Claude, this is news worth a closer look.
A high-end graphics card that cost well over 10 million won per unit eight years ago is now being sold on the used market for around 140,000 won. That is a price point at which you can run AI comparable to ChatGPT directly on a regular home PC.
The overseas YouTube channel “Hardware Haven” recently revealed an experiment in which it obtained an NVIDIA V100 data-center graphics card for 140,000 won and connected it to its PC.
The card in question is the 16GB memory model, and the implications are significant: own one, and you may no longer need to keep paying monthly AI subscription fees.

High-end GPUs that have come down from data centers to home desks
NVIDIA V100 (Photo: NVIDIA)
The V100 is a graphics card launched by NVIDIA in 2017 exclusively for data centers. At the time of release, its price exceeded $10,000, or about 14 million won. It was a core component used to train large AI models such as ChatGPT.
Eight years later, this card is trading for around $100, or about 140,000 won, on eBay, the U.S. used-goods marketplace. The price collapse is the result of businesses offloading older V100 cards in bulk as newer data-center GPUs have been released one after another.
The problem is that buying it is not the end of the story. There are two types of V100. One is designed to be plugged directly into a normal PC, while the other is a special data-center-only specification called “SXM2.” The model that has fallen to 140,000 won is the SXM2 version, which cannot be connected to a regular PC in the usual way.
The solution is an “SXM2-to-PCIe adapter,” a conversion board that serves as a bridge allowing the data-center card to be connected to a home motherboard. It costs about 140,000 won. Adding cooling fans and a cover made with a 3D printer to prevent overheating brings the total cost to around 330,000 won.
Considering that NVIDIA’s RTX 3060 (12GB), a popular gaming graphics card, sells for 400,000 to 500,000 won when new, this is a similar or even cheaper price range. The key point, however, is that its AI performance came out higher than that of the RTX 3060.
Why local AI is drawing attention: “Running ChatGPT on my own computer”
The benchmark results released by Hardware Haven became a topic of discussion after being cited by U.S. tech outlets Tom’s Hardware and VideoCardz.
Using Ollama, a free tool for running AI models locally, the V100 processed OpenAI's openly released 20-billion-parameter model, gpt-oss-20b, at a speed of 130 tokens per second.
Under the same conditions, the newer Radeon RX 7800 XT managed just 90 tokens per second. In another test with Google's Gemma 3n E4B model, the V100 scored 108 tokens per second, while the RTX 3060 (12GB) recorded 76.
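For readers who want to reproduce this kind of measurement, here is a minimal sketch in Python using the Ollama Python package. It assumes the Ollama server is running locally and the model has already been pulled; the model tag is illustrative, so check `ollama list` for what you actually have.

```python
# Minimal throughput check against a local Ollama server.
# Assumes: `pip install ollama`, the Ollama daemon is running,
# and the model tag below has been pulled (e.g. `ollama pull gpt-oss:20b`).
import ollama

def tokens_per_second(model: str, prompt: str) -> float:
    # A non-streaming chat call returns timing metadata alongside the answer.
    resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    # eval_count = tokens generated; eval_duration is in nanoseconds.
    return resp["eval_count"] / resp["eval_duration"] * 1e9

print(tokens_per_second("gpt-oss:20b", "Explain what a GPU does in one paragraph."))
```

The tokens-per-second figures quoted above are exactly this ratio of generated tokens to generation time.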
To put it simply, imagine cooking the same 10,000-won packet of ramen on a 500,000-won induction stove and on a 300,000-won gas burner, only to find the gas burner does the job faster. That is the reversal happening here: the cheaper option delivers the better performance.
The background to this trend is a new way of using AI called “local AI.” Until now, AI services like ChatGPT, Claude, and Gemini have all worked by connecting over the internet to distant data centers. Users pay a monthly subscription fee, send questions to company servers, and get answers back.
Local AI is the exact opposite. The AI model is installed directly on your own PC and used without an internet connection. Once installed, there are no additional costs. Because your questions do not flow out to external company servers, privacy is better protected. It is also free from the censorship policies under which AI companies may refuse to answer certain questions.
It is also attractive to businesses. That is because customer data can be processed internally without being sent outside the company. In fields such as medicine, law, and finance, where information leaks can be critical, the need for local AI becomes even greater.
Practical limits and alternatives consumers should know
There are downsides. In South Korea, obtaining a V100 usually means ordering directly from overseas. The common route is to buy the SXM2 card and adapter separately on eBay in the United States and import them. Once customs duties, shipping fees, and the difficulty of exchanges and refunds are factored in, the actual cost can be higher.
Technical barriers are also hard to ignore. Installing the adapter board, making a cooling cover with a 3D printer, setting up operating-system drivers, downloading AI models, and getting them to run are all challenging tasks for non-specialists.
The card's power consumption is also high. It draws 250 watts under load and continues to consume a meaningful amount of electricity even when sitting idle. Run 24 hours a day, it can push monthly electricity bills up noticeably.
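As a rough sanity check, the arithmetic works out as follows, assuming the worst case of the card drawing its rated 250 watts around the clock; the per-kWh rate is an illustrative flat figure, not an actual Korean tariff.

```python
# Back-of-the-envelope power cost for 24/7 operation.
# The 150 won/kWh rate is an assumed flat figure for illustration.
WATTS = 250
HOURS_PER_MONTH = 24 * 30
KRW_PER_KWH = 150

kwh_per_month = WATTS * HOURS_PER_MONTH / 1000   # 180 kWh
monthly_cost_krw = kwh_per_month * KRW_PER_KWH   # ~27,000 won
print(f"{kwh_per_month:.0f} kWh/month, roughly {monthly_cost_krw:,.0f} won")
```

Powering the machine down when it is not in use shrinks this figure substantially.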
The answer is a step-by-step approach. There is no need to start with a V100-class device from the beginning. Even a small computer such as a Raspberry Pi can run small-scale AI models. The speed is slow, but entry is possible at a cost of around 100,000 won.
More free programs that simplify installation are also emerging. Tools like Ollama and LM Studio can install open-source AI models on a PC with just a few clicks. Users can proceed as if installing a normal program, without entering command lines. Open-source models that support Korean are also gradually increasing.
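Both tools also expose an OpenAI-compatible API on your own machine, so code written for a cloud service can be pointed at a local model instead. A minimal sketch, assuming Ollama's default port (11434; LM Studio defaults to 1234) and a locally pulled model whose tag you would substitute:

```python
# Talking to a locally served model through an OpenAI-compatible endpoint.
# Assumes: `pip install openai`, Ollama running on its default port,
# and a model already pulled locally (the tag below is illustrative).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")  # key is ignored locally
reply = client.chat.completions.create(
    model="gpt-oss:20b",  # substitute whatever model you have installed
    messages=[{"role": "user", "content": "Say hello in Korean."}],
)
print(reply.choices[0].message.content)
```

Nothing in this exchange leaves the machine, which is the privacy argument from earlier in concrete form.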
With the price bubble deflated, now may be the time to buy: a long-term "investment"
Experts believe the V100 price will not stay at this level for long. As demand for local AI rises, interest in used data-center GPUs is increasing, and inventories are being absorbed quickly. Once the market recognizes this arbitrage opportunity, prices could rise again.
For that reason, some say now is the right time to get in. Considering that the annual subscription fee for paid AI services like ChatGPT Plus is around 300,000 won, calculations suggest that a V100-based PC could break even in one to two years. After that, it would effectively become “free AI.”
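As a sketch of that break-even arithmetic: the hardware and subscription figures follow the article's estimates, while daily usage hours and the electricity rate are assumptions.

```python
# Rough break-even estimate. Hardware and subscription costs are the
# article's figures; usage hours and the power tariff are assumptions.
hardware_krw = 330_000
subscription_per_month = 25_000                 # ~300,000 won per year
hours_per_day = 4                               # assumed active use
electricity_per_month = 0.250 * hours_per_day * 30 * 150  # kW x h x days x won/kWh

net_saving = subscription_per_month - electricity_per_month  # ~20,500 won/month
print(f"Break-even in about {hardware_krw / net_saving:.0f} months")
```

Under these assumptions the answer comes out around 16 months, squarely inside the article's one-to-two-year window.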
Of course, cloud AI has clear strengths. You can use the latest models immediately, there is no hardware management burden, and access is possible from anywhere. Local AI sacrifices all of that convenience in exchange for control and lower costs.
The important thing is that “more choices” now exist. In the AI era, consumers can decide for themselves whether to pay monthly subscriptions, make a one-time investment and use it for life, or combine both approaches.
Local AI is no longer just a space for a small number of developers. With used parts worth around 100,000 won and a bit of time, we are entering an era in which anyone can have their own AI.