Any interest in "Virtual Metal" or virtualized dedicated hardware?

MannDudeMannDude Host Rep, Veteran
edited February 2023 in General

They go by many names: "Virtual Dedicated Server", "Virtual Metal", "Smart Server", etc., but essentially they're all the same thing.

I'm not referring to getting a couple of dedicated cores from a VPS product, but rather a full dedicated server with a virtualization layer on top.

The idea is that we may change how we currently offer dedicated servers: possibly stop giving 100% full root access to the bare metal host node by default, and instead offer a 100% dedicated KVM VPS with specs equal to the bare metal hardware, plus VirtFusion control panel access for you to manage your server. We'd possibly still allow 100% full metal access, with limited support or help, for those who request it.

The benefit of this is actually more control options for your server than are available now (install from ISO, or from more OS template options, better graphing, and potential direct access to backup space / snapshots). Additionally, this setup makes it easier to migrate to larger hardware or between locations if needed, and allows us to more easily monitor hardware health and errors.

The downside is a slight hypervisor overhead eating up a couple GB of RAM and minimal CPU.

In theory, the performance should be practically identical, and since most dedicated servers have 32+ GB of RAM and many CPU cores, the overhead of this setup would be of minimal concern.

We currently sell hardware in the Netherlands from Worldstream using our own AS and IP space; however, the end-user control functions available through their API are limited to reloading the OS from a small number of options (no ISO install, for example), PTR settings, bandwidth graphs, and some basic power controls. Soon, we'll have some options available in the United States as well. This seems like a suitable way to keep the user experience and feature set the same between both locations while we help our upstream datacenters move their own hardware, just on our own network / IP subnets.

If we instead move forward with pushing "Virtual Dedicated / Virtual Metal", then you'd have those features and more: mainly the ability to utilize recovery options, VNC, install from ISO, better / prettier graphs, the ability to scale up or migrate to a new location if needed, as well as access to snapshots that you can take and restore. Additionally, if you have multiple IP addresses, we could split your dedicated machine up for you into 2 or 3 KVM VMs of the specs you desire, and these could individually be migrated to their own hardware in the future if needed as well.
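
For the curious, here's a rough sketch of what this looks like under the hood. It's a minimal illustration using the libvirt Python bindings on a KVM host, not our actual provisioning code; the domain name, sizing, disk path, and bridge below are placeholders:

```python
# Minimal sketch: defining one KVM guest that spans (nearly) the whole host.
# Assumes a libvirt/KVM hypervisor and the libvirt Python bindings; every name,
# size, and path below is an illustrative placeholder, not a real config.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>virtual-metal-01</name>
  <memory unit='GiB'>60</memory>              <!-- leave a couple GiB for the hypervisor -->
  <vcpu placement='static'>20</vcpu>          <!-- all threads of a 10c/20t CPU -->
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <cpu mode='host-passthrough'/>              <!-- expose the real CPU model to the guest -->
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/virtual-metal-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>   <!-- remote console -->
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the guest definition
dom.create()                            # boot it
print(f"Started {dom.name()} (id {dom.ID()})")
conn.close()
```

Because the guest is just a libvirt domain, the panel gets ISO installs, VNC, snapshots, and migration essentially for free.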

So, if the cost was the same, would you choose this option when searching for a dedicated server? As mentioned above, we'd likely have the option for you to opt-out but it'd come with limited support and assistance. At that point you'd basically just get SSH credentials to your bare metal machine and some limited features depending on location. The Virtualized route would allow you to pretty much help yourself in most cases and give us a sane way to manage a growing number of dedicated server customers.

Thoughts?

Thanked by 1ChrisMiller
Would you use a "Virtual Metal" type server?
  Is this a product / service you would consider if you were in the market for a dedicated server? (90 votes)
    1. Yes, I would consider it or I would use it. (65.56%)
    2. No, I wouldn't use it and my reasons are listed in the comments. (34.44%)

Comments

  • I would use a Virtual Metal type server depending on the price.
    Being able to install Windows Server on it instead of Linux would make it more appealing to me.

  • MannDudeMannDude Host Rep, Veteran

    @Nanja said:
    I would use a Virtual Metal type server depending on the price.
    Being able to install Windows Server on it instead of Linux would make it more appealing to me.

    There wouldn't really be a difference in price. It'd cost us $1.50/mo for a VF license for each node, but I think that's something we can generally eat and find worth it, since on our end management will be easier and it allows us to offer more attractive features (like bringing your own ISO, Windows included) than we can now.

  • MannDudeMannDude Host Rep, Veteran

    The poll disappeared. I've re-added it.

  • MrRadicMrRadic Patron Provider, Veteran

    With current tech, what specifically can't be achieved with just bare metal nowadays?

    Thanked by 2darkimmortal MikeA
  • MannDudeMannDude Host Rep, Veteran
    edited February 2023

    @MrRadic said:
    With current tech, what specifically can't be achieved with just bare metal nowadays?

    Namely how we manage the servers and the feature set that we can offer the end user.

    We resell hardware from our upstreams using our own IP space and have some custom configs they offer us for this purpose, but they're two different providers on two different systems. In one location we have limited API access so users can manage some basic functions from WHMCS like rebooting their server, seeing BW graphs or set their PTR records. In the other location, we can't offer this. If you're a user who has servers in both locations, it may be confusing and require more disclaimers on our end that the two different locations have two different end-user feature sets.

    It's just sort of a pain, that way, and can be confusing to the end-user who may wonder why they can perform XX function in NL, but not that same function in the US, for example.

    The idea behind doing the 'virtual metal' (or whatever) is giving the user a clean UI to manage everything from, a consistent experience regardless of service location, and with more features available to them. This way they can easily reinstall their OS, use a remote console if needed, take a snapshot / backup, see historical graphs, migrate to different hardware later without having to rebuild, etc. Not all of this would be possible otherwise.

    I'll still probably offer 100% bare metal, same cost, but it'll be more of a "Here's your SSH credentials. Open a ticket if you need us to reboot or reload your OS, but don't expect us to want to do this for you a dozen times a month when we gave you the option to manage this yourself for free." type of thing.

  • I would not consider this just because I know there would be an OS running on top of what I'm seeing, and I would be restricted from this system, which in theory could be used to look at/monitor/back up anything I host and store without my knowledge.

    The performance loss is not the issue; the issue is that you are selling private space (i.e. a dedicated server) which has a hidden layer that can access everything below itself.

    You probably have no issue if you live in an apartment building and you have a neighbor that constantly listens for whether or not your dog barks when you are at work, but if you replace that wall with one-way glass, which allows your neighbor to watch your and your dog's life whenever he wishes to look that way, you won't like it either.

    When something is sold only to you, there should be nobody else in the same space.

    I would never save my beach/high school images and other personal photos/memories onto some VPS server. I do save them on dedicated servers though, since I would at least know if someone had potentially copied them or ejected the drive.

  • MannDudeMannDude Host Rep, Veteran

    @stefeman said:
    I would not consider this just because I know there would be an OS running on top of what I'm seeing, and I would be restricted from this system, which in theory could be used to look at/monitor/back up anything I host and store without my knowledge.

    The performance loss is not the issue; the issue is that you are selling private space (i.e. a dedicated server) which has a hidden layer that can access everything below itself.

    You probably have no issue if you live in an apartment building and you have a neighbor that constantly listens for whether or not your dog barks when you are at work, but if you replace that wall with one-way glass, which allows your neighbor to watch your and your dog's life whenever he wishes to look that way, you won't like it either.

    When something is sold only to you, there should be nobody else in the same space.

    I would never save my beach/high school images and other personal photos/memories onto some VPS server. I do save them on dedicated servers though, since I would at least know if someone had potentially copied them or ejected the drive.

    I can understand the concern. We'd still offer traditional bare metal access, but it'd be basically just us handing off SSH creds with limited end-user control, at least in the US location. For those who know what they're doing and don't need support, this would suffice. In NL there would be some basic functions available from WHMCS for the bare metal control.

    Alternatively, I'd say installing your server from ISO and setting up FDE during install would also suffice; that's what I personally do for all my VMs with us and other providers.
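
    As a small aside (a minimal sketch, not specific to our setup): once such an install is done, you can confirm from inside the VM that the root filesystem really sits on a dm-crypt/LUKS layer, for example by parsing lsblk's JSON output:

```python
# Minimal sketch: verify from inside a guest that '/' is backed by a dm-crypt (LUKS)
# device, i.e. that full-disk encryption is actually in place. Assumes util-linux's
# lsblk is available (standard on most Linux distros).
import json
import subprocess

def root_is_encrypted() -> bool:
    """True if a device of TYPE 'crypt' sits on the path holding the '/' mount."""
    tree = json.loads(
        subprocess.run(
            ["lsblk", "--json", "-o", "NAME,TYPE,MOUNTPOINT"],
            capture_output=True, text=True, check=True,
        ).stdout
    )

    def holds_root(node) -> bool:
        if node.get("mountpoint") == "/":
            return True
        return any(holds_root(child) for child in node.get("children", []))

    def crypt_on_root_path(node) -> bool:
        if not holds_root(node):
            return False
        if node.get("type") == "crypt":
            return True
        return any(crypt_on_root_path(child) for child in node.get("children", []))

    return any(crypt_on_root_path(dev) for dev in tree["blockdevices"])

print("root is on LUKS/dm-crypt" if root_is_encrypted() else "root is NOT encrypted")
```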

  • crunchbitscrunchbits Member, Patron Provider, Top Host

    @stefeman said:
    I would not consider this just because I know there would be an OS running on top of what I'm seeing, and I would be restricted from this system, which in theory could be used to look at/monitor/back up anything I host and store without my knowledge.

    The performance loss is not the issue; the issue is that you are selling private space (i.e. a dedicated server) which has a hidden layer that can access everything below itself.

    You probably have no issue if you live in an apartment building and you have a neighbor that constantly listens for whether or not your dog barks when you are at work, but if you replace that wall with one-way glass, which allows your neighbor to watch your and your dog's life whenever he wishes to look that way, you won't like it either.

    When something is sold only to you, there should be nobody else in the same space.

    I would never save my beach/high school images and other personal photos/memories onto some VPS server. I do save them on dedicated servers though, since I would at least know if someone had potentially copied them or ejected the drive.

    I have thought about this as well re: offering VDS vs bare metal. One thing I will say is that either way you're still renting. You still have to trust your provider first and foremost.

    It would definitely be easier for a provider to be sneaky on a VDS, though.

    As a provider, I think the experience is smoother overall with a VDS. There are pros and cons to both, but the experience is significantly more hardware agnostic and flexible for VDSes. Additionally, some things that you can implement on a VDS that you can't do directly on bare-metal (without supporting hardware or talented network admin):

    • Port speed changes: i.e. hook hypervisor up at 10G, offer customer anything you want 100Mbps to 10G
    • Anti-IP hijacking, port isolation all done via hypervisor
    • Bandwidth limitation/auto-suspends or reduced speeds (without having to tie into a switch port)
    • Hardware and physical resource monitoring: i.e. provider can monitor disk/raid health and automatically handle swaps/repairs without relying on hardware failure and/or (angry) customer tickets

    This doesn't take into account all the issues you have with running a variety of hardware. For example, our newest nodes are not compatible with our bare metal KVM console viewer (yet). Not an issue on a VDS.

    Basically it just reduces support/customization overhead for a provider. I can let a level 1 tech mess with port speeds/bandwidth allowances on our VM platform. They are not going to be messing with a Juniper config.
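
    For example, capping a guest's port speed from the hypervisor side is about a one-liner (a rough sketch assuming a libvirt/KVM host; the domain and interface names are placeholders, and virsh domiftune takes values in KiB/s, so ~122000 is roughly 1 Gbps):

```python
# Minimal sketch: rate-limit a guest's virtual NIC from the hypervisor, with no switch
# config needed. Assumes a libvirt/KVM host; domain and interface names are placeholders.
import subprocess

def set_port_speed(domain: str, iface: str, kib_per_sec: int) -> None:
    """Apply an inbound + outbound rate limit to a running guest's interface."""
    subprocess.run(
        [
            "virsh", "domiftune", domain, iface,
            "--inbound", str(kib_per_sec),
            "--outbound", str(kib_per_sec),
            "--live",      # apply to the running guest
            "--config",    # persist across guest restarts
        ],
        check=True,
    )

# e.g. sell a 10G-connected node with a 1 Gbps port:
set_port_speed("virtual-metal-01", "vnet0", 122000)
```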

    I think @stefeman is overall correct, though a "VPS" has the exact same issues despite also being 'private space'. For certain users it will definitely be a deal-breaker and the benefits aren't really useful to them, but I don't think it's a problem as long as you aren't selling a VDS as "bare metal" and it's clearly outlined.

  • @crunchbits said:

    @stefeman said:
    I would not consider this just because I know there would be an OS running on top of what I'm seeing, and I would be restricted from this system, which in theory could be used to look at/monitor/back up anything I host and store without my knowledge.

    The performance loss is not the issue; the issue is that you are selling private space (i.e. a dedicated server) which has a hidden layer that can access everything below itself.

    You probably have no issue if you live in an apartment building and you have a neighbor that constantly listens for whether or not your dog barks when you are at work, but if you replace that wall with one-way glass, which allows your neighbor to watch your and your dog's life whenever he wishes to look that way, you won't like it either.

    When something is sold only to you, there should be nobody else in the same space.

    I would never save my beach/high school images and other personal photos/memories onto some VPS server. I do save them on dedicated servers though, since I would at least know if someone had potentially copied them or ejected the drive.

    I have thought about this as well re: offering VDS vs bare metal. One thing I will say is that either way you're still renting. You still have to trust your provider first and foremost.

    It would definitely be easier for a provider to be sneaky on a VDS, though.

    As a provider, I think the experience is smoother overall with a VDS. There are pros and cons to both, but the experience is significantly more hardware agnostic and flexible for VDSes. Additionally, some things that you can implement on a VDS that you can't do directly on bare-metal (without supporting hardware or talented network admin):

    • Port speed changes: i.e. hook hypervisor up at 10G, offer customer anything you want 100Mbps to 10G
    • Anti-IP hijacking, port isolation all done via hypervisor
    • Bandwidth limitation/auto-suspends or reduced speeds (without having to tie into a switch port)
    • Hardware and physical resource monitoring: i.e. provider can monitor disk/raid health and automatically handle swaps/repairs without relying on hardware failure and/or (angry) customer tickets

    This doesn't take into account all the issues you have with running a variety of hardware. For example, our newest nodes are not compatible with our bare metal KVM console viewer (yet). Not an issue on a VDS.

    Basically it just reduces support/customization overhead for a provider. I can let a level 1 tech mess with port speeds/bandwidth allowances on our VM platform. They are not going to be messing with a Juniper config.

    I think @stefeman is overall correct, though a "VPS" has the exact same issues despite also being 'private space'. For certain users it will definitely be a deal-breaker and the benefits aren't really useful to them, but I don't think it's a problem as long as you aren't selling a VDS as "bare metal" and it's clearly outlined.

    I'm well aware that the provider has multiple ways to monitor even dedicated servers, or the network, if they want to. The issue is less about monitoring or suspicion and more about having a full house to myself when there's another room and entrance on the top floor that I cannot access, even if it takes up very little space. It's a place where someone could potentially live while I pay for everything.

  • BasToTheMaxBasToTheMax Member, Host Rep

    Will I still be able to run Proxmox and host VMs? (nested virtualization)

  • AbdAbd Member, Patron Provider

    @BasToTheMax said:
    Will I still be able to run Proxmox and host VMs? (nested virtualization)

    yes

  • MrRadicMrRadic Patron Provider, Veteran

    @MannDude said:

    @MrRadic said:
    With current tech, what specifically can't be achieved with just bare metal nowadays?

    Namely how we manage the servers and the feature set that we can offer the end user.

    We resell hardware from our upstreams using our own IP space and have some custom configs they offer us for this purpose, but they're two different providers on two different systems. In one location we have limited API access so users can manage some basic functions from WHMCS like rebooting their server, seeing BW graphs or set their PTR records. In the other location, we can't offer this. If you're a user who has servers in both locations, it may be confusing and require more disclaimers on our end that the two different locations have two different end-user feature sets.

    It's just sort of a pain, that way, and can be confusing to the end-user who may wonder why they can perform XX function in NL, but not that same function in the US, for example.

    The idea behind doing the 'virtual metal' (or whatever) is giving the user a clean UI to manage everything from, a consistent experience regardless of service location, and with more features available to them. This way they can easily reinstall their OS, use a remote console if needed, take a snapshot / backup, see historical graphs, migrate to different hardware later without having to rebuild, etc. Not all of this would be possible otherwise.

    I'll still probably offer 100% bare metal, same cost, but it'll be more of a "Here's your SSH credentials. Open a ticket if you need us to reboot or reload your OS, but don't expect us to want to do this for you a dozen times a month when we gave you the option to manage this yourself for free." type of thing.

    I didn't realize you didn't own/operate. While I understand that this is a work-around for that, you're adding complexity on the user and operational side. Any possible way you can ask the second provider to put something together for you via an API? Or possibly start considering getting into coloing your own hardware?

    What you're doing did briefly exist, but it was riddled with posts with users trying to get help figuring out whether they really did have access to the full hardware or if they were being screwed (the latter happened a few times).

  • If it's another $3/month deal, I'm with ya, buddy.

  • "Virtual Metal" requires way way more trust than bare metal.
    You CANNOT do FDE properly in a virtualized environment; all keys can be dumped through the hypervisor (which is significantly easier than "cold boot attacks" or other attacks that can be performed on bare metal, and then only under specific circumstances).

    I would stick to offering VPS and VDS(dedicated cores).
    Keep Bare Metal, Bare Metal.

  • I'm not interested in that. With bare metal, I expect full access without any layers in between. Skepticism about the claimed minimal performance loss aside, there are also operating systems (admittedly not many) that don't work with virtualized storage controllers. Furthermore, I'm not comfortable running nested virtualization and have concerns about unknown drawbacks.

    Additionally, I would have to rely on the provider to properly monitor the hardware. If there was a total data loss due to a hard disk not being replaced in time, I prefer to be responsible myself.

    Overall, I believe that this setup only benefits the provider and not the customer. In my opinion, there's no significant advantage over a regular VPS with dedicated resources. VPS with dedicated resources usually also offer good disk performance.

  • Honestly, I like VDI (or back in the day I think people called them hybrid servers). It's like owning a townhouse: you're out of an apartment, but still not in a whole house.

    But when you own the townhouse then what?

    I think on a technology level it's much easier on the host to just have one big container on top of the bare metal hardware. You can easily integrate it into your deployment since it's basically the same package. You're still offering on-demand reinstalls and management. You can plug it back into your hardware monitoring setup because you can deploy all your code (hard drive degradation, network usage, etc.) just like normal. It's easy.
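
    For example, the kind of disk-health poll a host might run looks something like this (a minimal sketch assuming smartmontools is installed on the host; the device path is a placeholder and checking for the PASSED string is a simplification):

```python
# Minimal sketch: poll drive health on the host with smartmontools, the sort of check
# a provider could wire into its monitoring/alerting. Assumes `smartctl` is installed
# and run with enough privileges; /dev/sda is a placeholder device.
import subprocess

def drive_is_healthy(device: str = "/dev/sda") -> bool:
    """Return True if SMART reports the drive's overall health as PASSED."""
    out = subprocess.run(
        ["smartctl", "-H", device],
        capture_output=True, text=True,
    ).stdout
    return "PASSED" in out

if not drive_is_healthy("/dev/sda"):
    # in practice: open an internal ticket / schedule a disk swap before the customer notices
    print("WARNING: /dev/sda is failing its SMART health check")
```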

  • That's an interesting question, and my answer is that if I had some high-performance use cases, I think a bare-metal virtualization-based product would be great, so that I wouldn't have to deal with cumbersome, hard-to-use IPMI to control my servers and could easily install lots of stuff.

    But on the other hand, if I want to virtualize my own servers and install a virtualization console like PVE, I would hate nested virtualization, even though the performance loss would be limited.

    In fact, I think the question ultimately depends on your users, whether they have some expertise or are casual users, which determines the ultimate preference.

    Thanked by 1yoursunny
  • So... What's people's opinion about hybrid servers then? Virtualized servers but with strict limitations on total amount of capacity with dedicated resources. For example, 4 VMs per server and everyone gets a quarter of the total resources.

  • MannDudeMannDude Host Rep, Veteran

    @BasToTheMax said:
    Will I still be able to run Proxmox and host VMs? (nested virtualization)

    Yup.
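
    If you ever want to double-check from inside the guest, here's a minimal sketch (standard Linux paths, nothing specific to our setup): just look for the vmx/svm CPU flags and /dev/kvm:

```python
# Minimal sketch: check from inside a guest whether hardware virtualization is exposed,
# which is what Proxmox/KVM need for nested VMs. Standard Linux paths; /dev/kvm only
# appears once the kvm module is loaded in the guest.
from pathlib import Path

def nested_virt_available() -> bool:
    """True if the guest CPU exposes vmx (Intel) or svm (AMD) and /dev/kvm exists."""
    flags = Path("/proc/cpuinfo").read_text()
    has_hw_virt = ("vmx" in flags) or ("svm" in flags)
    return has_hw_virt and Path("/dev/kvm").exists()

print("nested KVM usable" if nested_virt_available() else "no hardware virt exposed to this guest")
```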

    @MrRadic said:

    @MannDude said:

    @MrRadic said:
    With current tech, what specifically can't be achieved with just bare metal nowadays?

    Namely how we manage the servers and the feature set that we can offer the end user.

    We resell hardware from our upstreams using our own IP space and have some custom configs they offer us for this purpose, but they're two different providers on two different systems. In one location we have limited API access so users can manage some basic functions from WHMCS like rebooting their server, seeing BW graphs or set their PTR records. In the other location, we can't offer this. If you're a user who has servers in both locations, it may be confusing and require more disclaimers on our end that the two different locations have two different end-user feature sets.

    It's just sort of a pain, that way, and can be confusing to the end-user who may wonder why they can perform XX function in NL, but not that same function in the US, for example.

    The idea behind doing the 'virtual metal' (or whatever) is giving the user a clean UI to manage everything from, a consistent experience regardless of service location, and with more features available to them. This way they can easily reinstall their OS, use a remote console if needed, take a snapshot / backup, see historical graphs, migrate to different hardware later without having to rebuild, etc. Not all of this would be possible otherwise.

    I'll still probably offer 100% bare metal, same cost, but it'll be more of a "Here's your SSH credentials. Open a ticket if you need us to reboot or reload your OS, but don't expect us to want to do this for you a dozen times a month when we gave you the option to manage this yourself for free." type of thing.

    I didn't realize you didn't own/operate. While I understand that this is a work-around for that, you're adding complexity on the user and operational side. Any possible way you can ask the second provider to put something together for you via an API? Or possibly start considering getting into coloing your own hardware?

    What you're doing did briefly exist, but it was riddled with posts with users trying to get help figuring out whether they really did have access to the full hardware or if they were being screwed (the latter happened a few times).

    Nah, we definitely rent/lease hardware where able. I'm not ashamed to admit publicly that I have no interest in taking on big loans or financing a bunch of hardware at this time. I don't want to be paying interest on six-figure loans or the additional manpower costs (emergency remote hands, rack-and-stacks, shipping, build/configuration, etc.) of owning the hardware outright. Instead, I've contacted nearly every dedicated server provider under the sun to see if they can meet our requirements at a reasonable cost and configure the network for us how we want it, while aligning with our core business ethos. We own the IPs and ASN at least... With that said, I recognize that it's quickly approaching the point where it probably makes more sense to start mixing in some owned gear where able. One of our requirements when seeking dedicated server providers was the ability to also do colo with them, to accommodate additional growth.

    What you're doing did briefly exist, but it was riddled with posts with users trying to get help figuring out whether they really did have access to the full hardware or if they were being screwed (the latter happened a few times).

    I do recall SingleHop (I think) catching some flak some years back over something similar. I think the issue was more the misleading marketing: they were calling the servers 'bare metal' when in fact they weren't.

    I'm still mulling over some possibilities. We get good deals and some custom configs that aren't available from the upstream directly, at great pricing, and we're just exploring a relatively headache-free way to offer them to the public. The idea behind adding them as VirtFusion slaves and making one giant KVM VPS per actual bare metal server is mainly end-user convenience, some added and useful features that the setup would provide, and the ability to tie it into a backup system that we could offer for an added cost. So it's about 50/50: for the user and their convenience, and for us to manage and integrate with other products (backup, DNS in the future, etc.). I'm also okay with just handing off login credentials with the disclaimer that there is no remote console, no OS reloading, no remote power control, and limited support as well.

    @treesmokah said:
    "Virtual Metal" requires way way more trust than bare metal.
    You CANNOT do FDE properly in a virtualized environment; all keys can be dumped through the hypervisor (which is significantly easier than "cold boot attacks" or other attacks that can be performed on bare metal, and then only under specific circumstances).

    I would stick to offering VPS and VDS(dedicated cores).
    Keep Bare Metal, Bare Metal.

    Well, it wouldn't be advertised as 'bare metal', it'd be advertised and marketed as 'Virtual Metal' or whatever. (But with the ability to opt-in to 100% bare metal with limited features and support.)

    For example, we have the following config available soon in the US:

    CPU: E5-2630L v4 (10c/20t) 1.8GHz base/2.9GHz boost
    RAM: 64GB DDR4 ECC
    DISK: (2) 480GB SSD (RAID-1)
    BW: 60TB @ 10Gbps
    PRICE: <$70/mo

    That'd be yours, minus the hypervisor overhead, which is minimal and, from a performance standpoint, likely unnoticeable.

    For those with heightened privacy needs, full bare metal access would of course be an option. You just wouldn't have many of the features that you may want from a dedicated server that way.

    @a_username said:
    Additionally, I would have to rely on the provider to properly monitor the hardware. If there was a total data loss due to a hard disk not being replaced in time, I prefer to be responsible myself.

    Overall, I believe that this setup only benefits the provider and not the customer. In my opinion, there's no significant advantage over a regular VPS with dedicated resources. VPS with dedicated resources usually also offer good disk performance.

    Good points; however, I think it'd be equally beneficial on both ends, due to the features the end user gets for managing the server. Maybe I've not used enough server providers, but many panels that I have used are pretty dated and limited in their feature sets. Though it would also, of course, benefit the provider by making it easier to manage things like assigning additional IPs, migrating to a new server if needed for hardware concerns, and the ability to integrate it with things like off-site snapshots for an added cost to the user.

    I'm still not dead set on one way or another.

    @danblaze said:

    In fact, I think the question ultimately depends on your users, whether they have some expertise or are casual users, which determines the ultimate preference.

    I'm learning from this thread, I think, that most would be cool with it, and those who want 100% full bare metal to themselves, whether due to privacy or performance concerns, are probably the same type of people who'd be okay with limited support and features.

    Got a lot to mull over!

    Thanked by 2treesmokah maverick
  • treesmokahtreesmokah Member
    edited February 2023

    @MannDude said: Well, it wouldn't be advertised as 'bare metal', it'd be advertised and marketed as 'Virtual Metal' or whatever. (But with the ability to opt-in to 100% bare metal with limited features and support.)

    I expected that from You; I would never think that You would mislead anyone.

    @MannDude said: For those with heightened privacy needs, full bare metal access would of course be an option. You just wouldn't have many of the features that you may want from a dedicated server that way.

    Sounds good.
    However, as a privacy-centric provider, I would offer "trustless" solutions wherever possible. I personally see no point in expanding the "virtualized offering" any further at this point.
    VPS and VDS are classics and are here to stay as cheaper/managed products for people with a "smaller" threat model.
    Bare metal is classy and for people that need some real protection.

    Regarding owned hardware, I would stick to leased servers for VPS and VDS - generally quicker to solve problems, it can be offloaded to the upstream, and people that use it are well aware that You can see anything they have going on there.

    For the Bare Metal offering, owned hardware would be very nice; we would put most of our trust in you and not Your upstream. There are plenty of hardware backdoors, and I believe as a privacy-centric provider You should consider owning the "Bare Metal" hardware.
    ^ that would require more "remote hands" than hypervisors for sure, but it's a sacrifice worth considering.

    On another note, I was doing some research into utilizing Intel SGX or AMD SEV for "truly encrypted" virtualized environments. I believe it's a topic worth researching; maybe someday safe FDE will be possible on VPS boxes.
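
    For anyone curious, a minimal sketch of checking whether the host's kvm_amd module even has SEV turned on (assumes an AMD host; on other hardware the sysfs file simply won't exist):

```python
# Minimal sketch: check whether AMD SEV memory encryption is enabled for KVM on the host.
# Assumes an AMD host with the kvm_amd module loaded; the sysfs parameter reads "Y"/"1"
# when enabled and "N"/"0" (or the file is absent) otherwise.
from pathlib import Path

def sev_enabled() -> bool:
    param = Path("/sys/module/kvm_amd/parameters/sev")
    return param.exists() and param.read_text().strip() in ("1", "Y", "y")

print("SEV available to guests" if sev_enabled() else "SEV not enabled on this host")
```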

    Also, while we are on it; @Francisco mate

    I believe it's a bit dishonest to say "allowing you to make sure that your data is safe and only accessible by those that have the encryption passphrase".
    We all know that it's not secure at all and does not provide any additional layer of protection.
    The only scenario where I think this could be useful would be preventing someone from recovering data from a shut-down system or from the drives directly.

    Thanked by 1maverick
  • I personally always create a VM on Proxmox on my bare metal. All of the overhead is worth it to get all the VM-management features.

  • MrRadicMrRadic Patron Provider, Veteran

    @MannDude said:

    Nah, we definitely rent/lease hardware where able. I'm not ashamed to admit publicly that I have no interest in taking on big loans or financing a bunch of hardware at this time. I don't want to be paying interest on six-figure loans or the additional manpower costs (emergency remote hands, rack-and-stacks, shipping, build/configuration, etc.) of owning the hardware outright. Instead, I've contacted nearly every dedicated server provider under the sun to see if they can meet our requirements at a reasonable cost and configure the network for us how we want it, while aligning with our core business ethos. We own the IPs and ASN at least... With that said, I recognize that it's quickly approaching the point where it probably makes more sense to start mixing in some owned gear where able. One of our requirements when seeking dedicated server providers was the ability to also do colo with them, to accommodate additional growth.

    Not here to shame you. I respect what you do and how you do it. There's no one right way to do it.

    We started out small and took advantage of whatever cost savings we could from providers larger than us. As we hit walls, we took creative measures to work around them, just like you're doing now. In all honesty, you're wasting your time with people's opinions here. Innovation doesn't happen based on what others think. If this is the way forward for you, take the chance, see how others have failed, and try to make it work the way you imagine it.

    I personally see this as a low risk move that you can easily roll back if it doesn't catch on.

  • jsgjsg Member, Resident Benchmarker
    edited February 2023

    Some thoughts. No affront intended, just straight and honest.

    • You are likely to fail with that. Simple reason: you are not cheap, and (at least most of) your customers buy from you for a simple reason: being able to stay anonymous, plus some efforts made by you to earn trust and provide confidentiality. However, a "virtual dedi" is pretty much the opposite of what makes people buy from you.
    • "Dedi" implies more than dedicated cores; it also implies a dedicated disk and dedicated bandwidth. The moment you virtualize and "spread" dedi resources over a couple of virtual machines is also the moment when "dedi" goes away and what a customer gets is not "dedi".
    • Unless you want to go really low-end, dedis (as in hardware) should be owned (as in neither loaned nor financed); that may be different for large and very well known players, but for your company you should avoid any additional cost (driving up your already not-low prices).

    What I do like about your deliberations, though, is that your company is mulling offering a "VDS" - which is exactly what it is, a bog-standard VDS, but in your case with the plus I get from you (anonymous, confidential, ...). That actually aligns well with your company.

    If you (or any provider for that matter) want to offer an extra, something that differentiates you from most other VDS providers and helps justify your not exactly low prices, just drive the 'D' in VDS a sensible step further and put multiple SSD arrays (well, 2 pieces, RAID 1) into the hardware: better performance for all customers and a more "dedi" feeling.

    The way I see it (possibly wrongly), your strength is doing things a bit differently, and doing them well and smart. And that is the line you should continue (adding small but interesting features not easy to get just anywhere).

    A VDS (in the common sense) I might buy from you. A "virtual dedi" I would not.

    Thanked by 2MannDude hyperblast
  • HalfEatenPieHalfEatenPie Veteran
    edited February 2023

    @MrRadic said: We started out small and took advantage of whatever cost savings we could from providers larger than us. As we hit walls, we took creative measures to work around them, just like you're doing now. In all honesty, you're wasting your time with people's opinions here. Innovation doesn't happen based on what others think. If this is the way forward for you, take the chance, see how others have failed, and try to make it work the way you imagine it.

    I believe it's halfway in between. These innovations are, imho, not really innovations but rather marginal improvements on the current established and accepted workflows. And that's great. But a market is a two-way street, and it takes a combination of user feedback + your own internal metrics to determine what the next level is. For example, @jsg's next post, imho, shows that jsg is not the intended target audience and has different perspectives when it comes to these things. That's ok. He has different priorities when he selects a vendor, and it's obvious he's not a target audience member. However, what's useful or insightful here is the reason why customers are responding in certain ways.

    What I see here is people who read "dedicated" and immediately start foaming at the mouth, bitching that "it's not privacy if there's virtualization on top".

    Incognet already operates where virtualization is a risk that's accepted by most customers. They focus on privacy, but the privacy-related risks associated with virtualization are already accepted by pretty much all customers because... well... that's what they sell. The people who are bitching and moaning about virtual metal not being "privacy-conscious" are... in my opinion... taking it to an extreme and not recognizing that the current client base already accepts this as a risk for convenience and pricing purposes.

    Ok, we want to talk specifically about "privacy" though. You guys remember William being raided over his Tor exit node and how joepie91 had an interview with him?

    Remember this? https://www.zdnet.com/article/austrian-man-raided-for-operating-tor-exit-node/ and https://lowendtalk.com/discussion/6283/raided-for-running-a-tor-exit-accepting-donations-for-legal-expenses/p1 ? raided4tor.cryto.net Wayback: https://web.archive.org/web/20140227192213/http://raided4tor.cryto.net/

    Let's be honest: William went off the deep end after this event kinda fucked over his life. The TL;DR, imho, is that if shit happens, a server running in a datacenter is probably easy to crack if it's already on, virtualized or not.

    I think the important part here is this: yes, a distinction between bare metal and virtual metal is needed. But beyond that, virtualization is already an accepted risk for most if not all of Incognet's client base. If someone needs the same level of service as an Incognet VPS but with dedicated resources, then I think it makes sense to just sell them that.

    You're literally bitching about convenience vs. privacy. Privacy is a partnership, though, and you, as the user, have to make the decision to set up your server that way too. @MannDude has built a foundation where the host has taken steps to increase privacy measures as best they can. It's your job to then build on top of it or just stick with that. Who knows, maybe you just like virtual metal because it's easier to work with anyway.

    Thanked by 2MannDude crunchbits
  • jsgjsg Member, Resident Benchmarker

    @HalfEatenPie

    I agree to quite some extent, incl. the part about me; I'm indeed very unlikely to buy such a product.

    I think, however, that you are a bit too harsh on the "foaming at their mouth bitching" people here. You are right that @MannDude (who seems to ignore me, oh well ...) already sells virtual servers, so what's the point (and here I disagree) of "bitching" about privacy, yada, yada?
    The answer is simple: a VPS is virtual, it's in the name. A dedi, though, is not virtual - and that's important, even decisive, because of what (potential) customers expect.

    With a virtual machine they expect that, a virtual machine, duh, and hence what comes with it, like "everything shared" and only a very modest expectation of privacy, etc.
    With a dedi, however, the expectations are very different, and virtualizing it, no matter how nicely packaged and marketing-wrapped, boils down to "it's basically just a large VxS, period", and hence to a very lacking basis in terms of privacy, security, etc.

    Thanked by 1HalfEatenPie
  • MannDudeMannDude Host Rep, Veteran
    edited February 2023

    @jsg said:
    You are right that @MannDude (who seems to ignore me, oh well ...)

    Checked my PMs, I don't see anything from you. Do you happen to have a pending ticket open? Feel free to PM me the details. (Though I do have a bad habit of reading a PM when on mobile, thinking I'll respond later when at my desk. And then I don't. Not intentional, I just get distracted. :) )

    If you feel ignored, it's certainly not intentional.

    I'll touch on the other points later. I think there is some confusion, because it wouldn't be shared resources. Only you would be assigned to the bare metal server if virtualized, or you could opt in to just receive SSH credentials and have a true bare metal experience.

    You're right, though: it's essentially a giant VPS where you have access to the full host node and don't share it with anyone else.

    Thanked by 1jsg
  • WebProjectWebProject Host Rep, Veteran
    edited February 2023

    New marketing term and nothing new; the following are available so far:

    • VPS
    • VDS
    • Cloud (VPS sold as cloud; a true cloud is located in at least 6 different geo locations so it will be close to any user within fractions of a second)
    • Instance

    Terms not yet in use:

    • Metal Cloud ©️
    • your above terms

    At the end of the day, virtualisation like KVM or XEN-HVM does the same thing: virtualisation of dedicated hardware or a server.

  • I prefer black metal.

    Thanked by 2WebProject MannDude
  • WebProjectWebProject Host Rep, Veteran

    @hyperblast said:
    I prefer black metal.

    You can have any type or colour of metal; it's your virtualisation, so your choice of name and terms 😂😂😂

    Personally like Titanium metal 👌

  • @MannDude another option that may be useful in V2 is resource packs/self service - you can essentially assign dedicated hypervisors to users.

    Jon Doe purchases 2 “bare metal servers” (Hypervisors). Those two servers are assigned to him through VF resource groups and limit profiles.

    He can now build VMs up to the capacity of those hypervisors.
