Need help with choosing setup for VDS infrastructure. Also looking for offers.

fragpic Member
edited March 2021 in Help

So, we currently have a Windows VDI setup using RDS, running in a VM on a local Proxmox node.
The VM is provisioned with 16 cores (Xeon E5-2673 v2), 80 GB of memory, and around 100 GB of storage.
I want to move this VM to an off-site location, since more users mean more hardware cost and more physical maintenance. Also, we want to eventually phase out this setup, so I don't want to invest in additional hardware.

Considering this scenario, which options do you guys suggest?
I am debating between a VPS and a dedi and want to know what you guys think.
Can you also suggest how I can compare these Xeon cores to newer CPUs so I can spec accordingly?
Also, I'm based in India and need latency to be under 100 ms, so I'm looking for a solution based in APAC.
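
For the latency requirement, one quick sanity check is to measure TCP connect time from the office to candidate locations before committing to a provider. A minimal Python sketch; the hostnames below are placeholders for whatever looking-glass or test endpoints a provider publishes, and TCP handshake time is only a rough lower bound on actual RDP round-trip latency:

```python
import socket
import time

# Placeholder endpoints -- swap in the provider's looking-glass / speedtest hosts.
CANDIDATES = [
    ("sgp.example-provider.net", 443),  # hypothetical Singapore node
    ("bom.example-provider.net", 443),  # hypothetical Mumbai node
]

def connect_time_ms(host, port, attempts=5):
    """Best (minimum) TCP connect time over several attempts, in milliseconds."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=2):
                samples.append((time.perf_counter() - start) * 1000)
        except OSError:
            pass  # unreachable or filtered; skip this sample
    return min(samples) if samples else None

for host, port in CANDIDATES:
    ms = connect_time_ms(host, port)
    print(f"{host}: {ms:.1f} ms" if ms is not None else f"{host}: unreachable")
```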

Help me
  1. VPS or Dedi (21 votes)
    1. VPS: 19.05%
    2. Dedi: 52.38%
    3. I just like to click buttons: 28.57%

Comments

  • 1030 Member

    I would go with a dedicated server and then make VMs.

  • @1030 said:
    I would go with a dedicated server and then make VMs.

    It is only going to be one large VM. At least, that's how it is set up right now.

  • The connection would be the one major concern with a shared VPS environment. Your users are connecting to a remote desktop (or virtual desktop) session, and either way that means RDS, while a VPS node has a number of other customers running their own servers, with all ingress and egress traffic sharing the same physical connection in and out of the hypervisor (all of which I am sure you are aware of). I would definitely want my own dedicated resources. You can get a sub-$100 dedicated server, the Windows license is going to cost the same as it would on a VPS, and you will have dedicated resources. In your case the most important of those is the physical NIC: with your traffic being the only traffic passing through it, other users cannot affect the connection, which would otherwise directly affect the user experience on a virtual desktop. I recommend against running this on any shared platform.

    Will you need just the one server? If so, I would eliminate the virtualization layer and run a dedicated instance of Windows directly on a physical dedicated server. If you need other hosted services and could make use of multiple servers in the form of VMs, then a hosted Proxmox solution would obviously work well. I am a huge fan of Proxmox and virtual environments, but only when I can take advantage of the features to run a solution more easily and efficiently. If you do want a virtual solution, you may as well go with a Dedicated Private Cloud via a fully redundant PVECeph setup (Proxmox and Ceph running hyper-converged) that has HA, automatic failover, and no single point of failure. It is by far our best-selling Dedicated Private Cloud; it works very well, is stable, and is probably the most sustainable fully redundant option with HA and failover, considering you would have to pay thousands of dollars in license fees for a very similar setup using VMware and vSAN (severely overpriced and overrated).

  • Correction: the PVECeph solution is by far our best-selling private cloud solution... not the best (in the world).

  • Wait, you are currently running the VM with 16 cores and 80 GB of RAM. I'm not sure what your usage actually is, but if it's anywhere remotely near what you listed, or over 4 cores and 8 GB of RAM, then there should be no debate in your mind: a dedicated server is the proper move. You can run Proxmox on your dedicated server if you want, that would be fine, but you will need the dedicated resources; otherwise you will be paying more for a shared VPS than you would for your own dedicated server... it's a pretty clear call.
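
    One way to settle the "what is my usage actually" question is to log CPU and RAM on the current RDS VM for a while before picking a size. A minimal Python sketch, assuming psutil is installed (pip install psutil); the one-hour window and one-minute interval are arbitrary choices:

    ```python
    import time
    import psutil  # assumption: psutil is available on the Windows VM

    # Sample roughly once a minute for an hour and print the results;
    # redirect to a file or raise the count for a longer capture.
    for _ in range(60):
        cpu = psutil.cpu_percent(interval=1)   # average % across all cores
        mem = psutil.virtual_memory()
        print(f"cpu={cpu:.0f}%  ram_used={mem.used / 2**30:.1f} GiB ({mem.percent:.0f}%)")
        time.sleep(59)
    ```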

  • fragpic Member
    edited March 2021

    @hivelocitylee thanks a lot for the detailed write-up. I didn't even think about the network congestion issue. My issues with a dedicated server are the maintenance and setup required (like setting up RAID), the fact that it becomes another instance that has to be constantly updated on top of the Windows VM itself, and the lack of HA (no fallback if the physical node goes down).
    Talking about the VPS, I was referring to a dedicated-core VPS similar to the root server offering by netcup, where there is a 1:1 mapping between physical and virtual cores. They would take care of all the backend setup like RAID, updates, node failures, etc., and I'd only have to worry about the actual VM.

  • For that amount of RAM, definitely have a look at a dedicated server. Something like an Intel Xeon E gives you the same number of cores if you go for an E-2278G or E-2288G, for example, and will offer better per-core performance than the older E5 by a mile.
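
    If you want a quick feel for how the old E5-2673 v2 cores compare with a newer chip, one rough approach is to time the same fixed single-threaded workload on both machines; published PassMark or Geekbench single-thread scores are a better basis for actual sizing. A minimal Python sketch of the idea:

    ```python
    import time

    def single_core_workload(n=5_000_000):
        """Fixed CPU-bound loop; run it unchanged on each machine being compared."""
        total = 0
        for i in range(n):
            total += i * i % 97
        return total

    start = time.perf_counter()
    single_core_workload()
    print(f"workload took {time.perf_counter() - start:.2f} s on this core")
    ```

    The ratio of the two wall-clock times gives a rough per-core speed ratio for this kind of workload.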

  • First of all, that depends on the type of work your users are going to perform inside RDS/VDI.
    You can use a single server with appropriate RDS licensing that allows multiple terminal connections, so all of your "users" will connect to a single server, not multiple servers.
    Another part is your budget. If it's sufficient, go with a cloud solution and all the hardware questions you mentioned (like RAID, node updates, etc.) will be taken care of. Most cloud providers offer HA, DRaaS, backup solutions, etc. It all depends on the specs you require.
    If all the work is basic office stuff, there is no big difference between CPU types (as long as they're not too old, of course).

  • @jessicaAcehost The RDS instance with this spec is already being used on-prem right now without any issues.
    As you said, my goal is to offload the node maintenance to the provider.
    The problem is that I can't really find a VPS provider that offers 128 GB of RAM and dedicated cores in APAC.
    Netcup root servers are literally perfect for my use case, both in terms of pricing and specs, but the problem is latency.
