This first post explains when a dedicated server is the right choice instead of a VPS, and how to think about hardware based on your software’s performance dependencies. The goal is to provide the context you need to avoid common bottlenecks: the wrong platform tier or CPU model, insufficient memory bandwidth, or a storage layout that restricts throughput.
Before describing dedicated servers, let’s take a quick look at Virtual Private Servers (VPS). They are often the natural next step from shared hosting: you get your own isolated server environment with dedicated RAM allocation and predictable baseline resources. Crucially, this comes with system-level redundancy. Because we operate a High-Availability (HA) VPS cluster based on Proxmox VE, your workload is protected against hardware failure—if a physical node goes down, your VM automatically recovers on another. However, like any virtualized platform, VPS still runs on shared infrastructure, so under sustained load or very latency-sensitive workloads you may see some variation in performance.
A Virtual Dedicated Server (VDS) sits between VPS and dedicated servers. In a typical VDS setup, vCPUs are mapped 1:1 to physical cores. This reduces CPU contention and makes compute performance more predictable, although other resources (like storage and network) may still be shared depending on the platform design.
Once your workload is latency-sensitive, I/O-intensive, or bound by per-core licensing, shared-host variance becomes the problem. At that point, a dedicated physical server gives you single-tenant control and predictable access to CPU, memory, storage, and the network interface.* You get the full bare metal machine with root/admin access, and you decide how it is configured and operated, whether that is one OS on bare metal, containers with Docker, or a virtualization layer such as Proxmox VE to run multiple workloads efficiently.
Unlike our 3rd Generation VPS, a single dedicated server has no automatic failover. To achieve similar “mission-critical” availability, you would need to architect your own redundancy, typically by deploying a cluster of at least three nodes.
Planning for your workload
Server configuration starts with understanding how the system will be used, and where your software tends to hit its limits. Once you know which constraint dominates, it becomes much easier to choose the right server class and avoid platform mismatch.
Most workloads are constrained by one, or a mix, of five factors:
- Per-core speed and latency (CPU frequency and architecture)
- Parallelism (how well the workload uses more cores or threads)
- Cache behavior (L3 cache and how well data stays “close” to the cores)
- Memory capacity and bandwidth (DDR generation and populated memory channels)
- Storage and network I/O (especially with large datasets, many sessions, or heavy consolidation)
For many web and application workloads, performance depends more on per-core speed and latency than on total core count. If the workload is not highly parallel, a smaller high-clock CPU often feels faster in practice, and is easier to utilize efficiently, than a larger platform designed for heavy concurrency.
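To make this trade-off concrete, Amdahl's law gives a rough upper bound on how much additional cores can help. The sketch below compares a hypothetical high-clock CPU against a hypothetical dense-core CPU; the clock speeds and core counts are illustrative assumptions, not actual product figures.

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: upper bound on speedup versus a single core."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

def relative_throughput(clock_ghz, cores, parallel_fraction):
    """Crude proxy for felt performance: per-core clock times Amdahl-limited scaling."""
    return clock_ghz * speedup(parallel_fraction, cores)

# Illustrative (assumed) figures, not real SKUs:
high_clock = (4.1, 32)   # high-frequency "F"-style CPU
dense      = (2.4, 96)   # dense-core CPU

for p in (0.50, 0.99):
    a = relative_throughput(*high_clock, p)
    b = relative_throughput(*dense, p)
    print(f"parallel fraction {p:.2f}: high-clock {a:.1f} vs dense-core {b:.1f}")
```

With a workload that is only 50% parallel, the high-clock CPU comes out ahead; at 99% parallel, the dense-core CPU wins. The crossover point is exactly what the paragraphs above describe in words.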
From customer conversations, we often see dedicated servers used for consolidation, either as a virtualization host or as a container platform. When the system is expected to run many services in parallel, the choice becomes less about optimizing for one application and more about keeping the platform predictable under sustained mixed load. In practice this points toward configurations with sufficient memory capacity, storage that remains responsive under parallel I/O, and enough CPU capacity that workloads do not spend their time competing for the same cores.
Peak clock speed matters most when you have one or a few latency-sensitive tasks. As consolidation grows, priorities tend to move toward overall capacity, more usable cores, and platform headroom, because the system spends more time sharing resources between many active workloads.
This is why CPU selection is usually a balance between per-thread performance and total throughput. On our EPYC 9000 platform, this balance is reflected directly in the CPU options: high-frequency “F” models in the 24–48 core range, balanced 64- and 96-core CPUs that combine strong per-core speed with high throughput, and dense-core variants with 112 cores or more for highly parallel workloads and higher consolidation density.
Configuring your dedicated server
Our dedicated server lineup is built on single-socket server boards in 2U enclosures, with sufficient airflow for modern CPUs, and is equipped with redundant Platinum-efficiency power supplies for continuous operation.
On our dedicated server page, we show three reference configurations as starting points. Each tier represents a clear step in capability and price. After selecting a tier (Ryzen, EPYC 7000, EPYC 9000), the configurator opens with that platform preselected.
From there you adjust CPU, RAM, storage, and network options within the platform’s limits, and the total price updates immediately to reflect your choices.
Before you complete payment, we provision the server and provide access credentials so you can verify the configuration. If anything looks off or you have a question, email us and we will look into it. After activation, you can still request adjustments or upgrades as your needs become clearer or your requirements evolve.*
This verification process applies to custom-configured servers. Our upcoming pre-configured instant dedicated servers (launching in 2026) will be available for immediate use upon payment.
CPU choice
The configurator starts with CPU selection, since this sets the baseline for per-core speed, total throughput, and licensing efficiency. High-frequency EPYC “F” models, and in some cases Ryzen CPUs, are usually the best fit for latency-sensitive services and per-thread workloads. Higher core-count CPUs are a better fit when performance comes from parallelism.
Memory capacity and bandwidth
The configurator lets you specify only total memory, not the number or size of memory modules. On modern platforms, however, total GB and memory bandwidth are not the same thing: a higher memory total does not automatically mean higher memory bandwidth.
Our approach to memory setup is intentional. We keep the memory layout flexible so we can build from stocked components, keep system pricing stable, and avoid unnecessary delays in delivery. Memory pricing has risen sharply and remains unusually high, so we rely primarily on our existing RAM module reserves. In practice, we tend to hold higher-capacity modules, so smaller memory configurations can reach the requested total with fewer modules, which populates fewer memory channels and results in lower system memory bandwidth.
If your workload is sensitive to memory bandwidth and channel population, for example dense virtualization, larger databases, analytics, or Solana endpoint nodes, tell us when ordering. We will review the build and, where possible, optimize the memory layout for bandwidth, not just total capacity.
Storage configuration options
Storage options differ by tier. On the Ryzen platform, optional mirrored SATA OS drives are available, since the platform provides native SATA ports suited for straightforward boot and OS separation. On the EPYC platforms, storage options are organized into mirrored NVMe pairs (RAID 1). The first NVMe pair is the primary mirrored set, while additional mirrored NVMe pairs add both capacity and I/O headroom for data-heavy or virtualized setups. On configurations that go beyond the motherboard's direct NVMe port connectivity, additional drives are attached through dedicated storage controllers (HBAs).
We standardize on the Samsung PM9A3 NVMe U.2 (2.5-inch) device family across our platforms. They are enterprise-class SSDs designed for continuous workloads, with strong sustained performance and predictable latency. Standardizing on one model keeps performance characteristics consistent and simplifies our operations and inventory.
Capacity also affects the drive's peak performance envelope. IOPS means "I/O operations per second"; as a reference point:
- 2×960 GB: around 580K read / 70K write IOPS
- 2×1.92 TB: around 850K read / 130K write IOPS
- 2×3.84 TB: around 1,000K read / 180K write IOPS
Higher numbers generally help when you have many small, random reads and writes, for example databases, VM fleets, and busy application servers. Real performance depends on workload pattern and queue depth, so these figures should be treated as capability indicators, not guaranteed application-level results.
As a rule of thumb, more drives increase total I/O headroom, and larger capacities in the same drive family often have higher rated IOPS. The exact gain depends on the storage controller model, how the storage is mirrored, the workload pattern, and queue depth, so these figures are best used as directional guidance.
Network and traffic profiles
Network and traffic are selected in the same way as the other components. You choose a port option, which defines the inbound and outbound speed limits, and an included traffic level that matches your needs.
The default port option is 1 Gbit/s in and out, while the higher-speed options provide higher inbound speed with a lower outbound cap, for example 10 Gbit/s in and 4 Gbit/s out, or 25 Gbit/s in and 5 Gbit/s out.
Each dedicated server includes 50 TB of combined monthly traffic (incoming + outgoing). You can add fixed traffic packs in the configurator if you expect higher usage. If you exceed your quota, we do not throttle traffic; instead, you will be prompted to upgrade to a higher traffic level. If you choose not to upgrade, extra traffic is billed at a flat rate of 10 NOK per TB.
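As a worked example of the overage rule, the sketch below applies the 50 TB included quota and the flat 10 NOK/TB rate from above; the helper name and example usage figure are our own for illustration.

```python
QUOTA_TB = 50            # included combined monthly traffic (in + out)
OVERAGE_NOK_PER_TB = 10  # flat rate for traffic beyond the quota

def overage_cost_nok(used_tb, quota_tb=QUOTA_TB):
    """Cost of traffic beyond the included quota; zero when under quota."""
    return max(0, used_tb - quota_tb) * OVERAGE_NOK_PER_TB

print(overage_cost_nok(62))  # 12 TB over quota -> 120 NOK
print(overage_cost_nok(40))  # under quota -> 0 NOK
```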
If your workload has a specific traffic pattern, for example high sustained egress, many concurrent sessions, or latency-sensitive traffic, tell us when ordering, and we will help size the right profile.
Since this article focuses on hardware configuration, it does not cover operating system, control panel, or backup options.
Alexey Nechuyatov
Strategic development, ServeTheWorld
Next article: ServeTheWorld’s guide to Dedicated servers. Part 2. Choosing the right platform