New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
@gsrdgrdghd You bet. Just make sure to route all logging to /dev/null.
Why? I have resized the disk on KVM.
Because we do not use LVM for the containers' HDDs, for performance and security reasons.
Ah Tim, I fully kept your statements in mind when I planned what equipment to sell, good sir. You made that point to me many moons ago, and I made sure to repeat it to the staff: 'East will sell slow, so don't expect to see it sold out for a while'.
We're aware, but people want more locations out of us
Thanks for the heads up Timmeh,
Francisco
Please don't tell me you're using QCOW2 images......
Francisco
That's not really fair, since we've got many people who want to pre-order.
The east coast build is going to be small at just 6 nodes total, 2 KVM and 4 OVZ. It's more a test of the market to see how things look. I'm not going to drop tons of cash on 10 - 15 nodes if it's a very slow market. I expect these users to churn less than the Chinese, but the Chinese don't leave us all that much as it is.
Again, some of my comments are based fully on Tim's own input, and it all makes sense, so I have no reason to question it.
Francisco
Probably not an hour. I figure if we sell out all 6 within the month I'll be happy.
Europe is....I dunno. I don't have any contacts over there that I could trust to take care of the hardware.
Francisco
Yea that's the last thing we need.
'Hey man, our 2GBs sell out so fast now that they include 10TB traffic. Boy we push 95% UDP now...wonder why >_>'
Francisco
What about the middle east?
What security reason? Don't use LVM on the HN, as Red Hat states in their best practices.
How is the Middle East the east, it's the Middle East, your own words :P
Asia is comprised (basically) of the Middle East and the Far East, or, collectively... the East.
4 Dual L5520's and a pair of E3 nodes, I doubt it lasts a week, but @Aldryic needs to post a policy on transfers now or bz will drive to Louisiana to kick his ass.
Pff, bz is too drunk to drive
I was thinking just after I hit post, wait, bz doesn't drive, or have a car, one of the two.
I'm not working in the department that handles this, so I'm not sure - I just know it is there, we reported it, and got ignored.
Edit: @DimeCadmium thanks
@William
my feedback: between 5GB on (presumably) low-latency SAS and 10GB on standard server-grade disks, I'd take the latter every time. You may call a vote on that.
OK, I really don't get this. Folks sell you something, and it's paid in advance, but hey – it may be down 100% of the time with no refund, by perfectly legit terms, because there is no commitment anyway. I don't get what kind of business this is.
In my case reliability comes before money. I can speak for Hetzner: they do have an SLA (only 99%, but better than nothing), and even business compensation in cases of gross negligence. In 6 months I had something like 3 minutes of downtime.
And I can speak for NQHost, I had some 40 minutes downtime in 2 years. NQHost also has amazingly good and fast support.
I am even more confused.
Would you compensate for this technical gap by performing a dump and restore of the previous drive? That takes 2 commands and 1 hour of computer time for you, and saves 2 days of work for the client.
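For illustration, the "two commands" would presumably be a compressed dump of the old drive and a restore onto the new one. A minimal local sketch (all file names here are made up; on a real node the source would be a block device or disk image path):

```shell
# Stand-in for the old drive: a small raw image
# (on a real node this would be e.g. an LVM volume or image file).
set -e
dd if=/dev/zero of=old-drive.img bs=1M count=4 status=none

# Command 1: dump the old drive, compressed.
gzip -c old-drive.img > old-drive.img.gz

# Command 2: restore it onto the new (larger) drive.
gzip -dc old-drive.img.gz > new-drive.img

# Verify the data survived the round trip.
cmp old-drive.img new-drive.img && echo "restore OK"
```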
He never said SSD. He specifically said an RPM rating, in fact, which means it's NOT an SSD, since SSDs don't spin (hence 'solid state')...
And as he's said, SLAs almost never actually apply. For example, in http://www.hostgator.com/tos/tos.php#9b they have an exception if you use a third-party monitoring service, credits are "at the discretion of" them, maintenance doesn't count (whether planned or emergency), ...
Why?
Our Austrian phone line goes to our office Mon-Fri 8:00-17:00, and our international phones go 24/7 to emergency staff.
Please show me how you put a 3.5" drive in an HP DL360 G7/G8... they only have 2.5" bays, which limits you to SAS drives or cheap, slow notebook drives, and a maximum of 8x1TB total space. In RAID10 that is only 4TB, from which the backup storage for the VPS also has to be deducted - with the backups, each VPS uses 4x its HDD size (the VPS plus 3 backup slots).
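For what it's worth, the capacity claim checks out; here is a quick worked version of the arithmetic (the RAID10 halving and the 4x-per-VPS factor are taken straight from the post above):

```shell
# Worked version of the capacity claim: 8 x 1TB 2.5" bays, RAID10
# halves usable space, and each VPS consumes 4x its nominal size
# (the VPS itself plus 3 backup slots).
bays=8; drive_tb=1
raw=$((bays * drive_tb))              # 8 TB raw
usable=$((raw / 2))                   # RAID10: 4 TB usable
per_vps_factor=4                      # VPS + 3 backups
sellable=$((usable / per_vps_factor))
echo "raw=${raw}TB usable=${usable}TB sellable=${sellable}TB"
```

So of 8TB raw, only about 1TB ends up as sellable VPS space, which is the point being made.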
How should this work?
The old drive is a file on the HDD, the new drive is a file on the HDD - how should this be merged with your old data? This cannot be done by dd or similar (yes, we tried this.).
http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/2-5/constellation/
These are great 2.5" Enterprise grade SATA III drives
The old drive is a file on the HDD, the new drive is a file on the HDD - how should this be merged with your old data? This cannot be done by dd or similar (yes, we tried this.).
Boot into a rescue CD and rsync the filesystem across, pretty basic stuff, maybe you should leave technical questions to your technical staff.
Running an rsync is no easier for the staff than the client
Never said easy or hard, but it can be done, which is contrary to what @William is stating.
Right, it can be done, but no easier by staff than by client. The OP was specifically saying "you can do it easier than clients can", William was addressing that, he was not addressing whether or not it was possible.
Thanks, I'm aware of how rsync works.
This is however contrary to what you stated before: you wanted the SAME instance upgraded, which implies the same MAC address and the same IP. That is not possible with rsync, since it requires 2 instances running at the same time with different MACs and IPs.
I referred to upgrading on the fly by resizing the HDD inside the container, not a migration, which is entirely different from an upgrade.
Also, you are welcome to do it with rsync; this is in fact what we offer customers - a new KVM they can migrate to within 7-14 days, and then they simply switch the IPs around.
HP does not support SATA3, only SATA2 (3Gbit) and SAS2 (6Gbit) - We used constellation 1TB SAS2 drives before, they are however now impossible to get.
It doesn't need to, the drives are backwards compatible.
Francisco
The only reason you wouldn't use LVM is if you want to oversell your space. qcow2 images are sparse files, so they only allocate up to the high-water mark of space your VM has ever used (not 100% sure if they shrink as well, maybe qcow2+ does that?).
File-based images show 'fake' performance improvements thanks to some weird caching, but then performance completely tanks in other spots to laughable levels.
qcow2 is great if you're in full control of all the data inside the VMs and you know you aren't ever going to use more than the physical media size, allowing you to cheap out on disks. For production though, that's just scary.
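The sparse-allocation behaviour described above is easy to demonstrate with a plain sparse file (on a real host, `qemu-img info` would show the same gap between virtual size and disk size for a qcow2 image; this sketch avoids needing qemu installed):

```shell
# Create a "100M drive" that occupies almost nothing on the host
# until the guest actually writes data, like a qcow2 image.
set -e
truncate -s 100M disk.img               # virtual size: 100M
apparent=$(stat -c %s disk.img)         # bytes the guest would see
allocated=$(du -k disk.img | cut -f1)   # KB actually allocated on the host
echo "apparent=${apparent}B allocated=${allocated}KB"

# Write 1M of real data; the allocation grows to a high-water mark.
dd if=/dev/urandom of=disk.img bs=1M count=1 conv=notrunc status=none
echo "after write: $(du -k disk.img | cut -f1)KB allocated"
```

This is exactly why overselling works until guests actually fill their disks, at which point the host runs out of physical space.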
Francisco
Maybe you should buy better hardware then
do you assign the space of a VM to a partition then?
LVM volumes, just like 99% of everyone else doing KVM/Xen.
Francisco