
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • s12321 Member

    What have you done with the migration, @VirMach?

    If you're moving data from rack A to rack B, there shouldn't be more than 24 hours of downtime; they're just VPS images.

    Is this a normal expectation?

  • poljik Member

    @fr33styl3 said:
    FFME005.VIRM.AC disks not available again

    Me too, FFME005, 2 VPSes:
    Booting from Hard Disk…
    Boot failed: could not read the boot disk

  • lowendclient Member

    @s12321 said:
    What have you done with the migration, @VirMach? [...] Is this a normal expectation?

    They're moving from DC A to DC B; each transfer runs over 1Gbps of bandwidth, with 10Gbps in total.
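
    As a rough sanity check on those numbers (back-of-the-envelope, assuming a hypothetical ~1TB of VPS images per node and the full 1Gbps actually being usable per transfer):

    1 Gbps ≈ 125 MB/s
    1 TB ≈ 1,000,000 MB → 1,000,000 / 125 ≈ 8,000 s ≈ 2.2 hours per node

    So the raw copy alone fits comfortably inside 24 hours; the longer outages presumably come from nodes queuing on the shared 10Gbps uplink, failing hardware stalling the copy, or reconfiguration work after the data lands, not from the transfer itself.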

  • s12321 Member
    edited July 2022

    @lowendclient said:
    They're moving from DC A to DC B; each transfer runs over 1Gbps of bandwidth, with 10Gbps in total.

    I've underestimated the risks of this migration, and one of my hosts has been down for more than 24 hours now :cold_sweat:

  • @s12321 said:
    I've underestimated the risks of this migration, and one of my hosts has been down for more than 24 hours now :cold_sweat:

    Move off VirMach, it's a deadpool.

    Thanked by: Yunen
  • brueggus Member, IPv6 Advocate

    @s12321 said:
    I've underestimated the risks of this migration, and one of my hosts has been down for more than 24 hours now :cold_sweat:

    Get used to it. One of my services has been offline for almost three weeks since it was migrated. A second one has been offline for a week and counting.

    I got myself a bunch of GreenCloudVPS' new budget VPSes, moved/restored my stuff there, and called it a day. It's just not worth the hassle, and from what I've seen so far I don't expect things to become stable at VirMach anytime soon.

  • alvin Member
    edited July 2022

    @s12321 said:
    I've underestimated the risks of this migration, and one of my hosts has been down for more than 24 hours now :cold_sweat:

    Don't forget there are people more unfortunate than you, whose VPSes died a month or more ago. o:)

    Thanked by: mcstudio
  • s12321 Member

    @alvin said:
    Don't forget there are people more unfortunate than you, whose VPSes died a month or more ago. o:)

    I saw that. I hope we all come out of this okay... :(

  • nightcat Member

    My main Chicago VPS is back now; it was down for about 24 hours. It was force-migrated to NYC.

    I guess all of Chicago will migrate to NYC.

    That's not my preferred location.

  • tridinebandim Member
    edited July 2022

    I turned off all four of my VPSes. When things calm down, I'll turn them back on and continue to idle.

  • VirMach Member, Patron Provider

    @FAT32 said:
    Is it just me, or are additional IPs lost after the scheduled migration?

    They'll be added back later. We had so many IP issues with the bug where multiple IPs got added that it was impossible to even manually add them back. We'll have to go through our records and either mass auto-add them or create a button for requesting them back.

  • VirMach Member, Patron Provider

    @s12321 said:
    What have you done with the migration, @VirMach? [...] Is this a normal expectation?

    The driver got a flat tire.

    No, but a ton of these had a host of hardware and network problems, mostly on the old E5 nodes. They may have seemed stable, but they were ready to fall apart soon. Some SATA SSDs and RAID controllers were most definitely nearing the end of their life, and then we subjected them to copying over the entire disk. Some of their NICs are definitely going to fail soon as well, with weird issues.

    @nightcat said:
    My main Chicago VPS is back now; it was down for about 24 hours. It was force-migrated to NYC. [...] That's not my preferred location.

    We'll figure Chicago out soon and send an announcement somewhere. Either we're getting rid of it and will give a pro-rated refund to anyone who really needs that location, or we'll get at least a quarter cabinet somewhere and let those who want to move process that automatically. The latter is more likely.

    Thanked by: ZA_capetown
  • passwa Member

    @VirMach
    Please handle ticket #355723

  • s12321 Member

    @VirMach said:
    We'll figure Chicago out soon and send an announcement somewhere. [...] The latter is more likely.

    Thanks @VirMach.
    I hope this was an expected situation on your end rather than an accident. :)

  • Rosyor Member

    @alvin said:
    Don't forget there are people more unfortunate than you, whose VPSes died a month or more ago. o:)

    Is it possible to open a ticket for an extra month of service or something under such circumstances, instead of receiving a refund?

  • taizi Member

    @VirMach said:
    We'll figure Chicago out soon and send an announcement somewhere. [...] The latter is more likely.

    Can you add USDT (TRC20) and BUSD (BEP20) to CoinPayments? Your CoinPayments gateway is useless, since most of its coins are already available through Coinbase Commerce. Why not just add some stablecoins that are fast and cheap?

  • BetaMaster Member
    edited July 2022

    @VirMach any news about the AMS storage pre-order? Is there any chance this order will be fulfilled in the near future? Thanks.

  • netrix Member

    @VirMach any update on LAXA024?

    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

  • _MS_ Member

    @BetaMaster said:
    @VirMach any news about the AMS storage pre-order?

    Let him fix all the post-orders first.

  • VirMach Member, Patron Provider

    @Rosyor said:
    Is it possible to open a ticket for an extra month of service or something under such circumstances, instead of receiving a refund?

    We auto-credit customers who are involved in a situation where their service is inaccessible. It's possible we forget once it's resolved, in which case you can create a ticket (once it's resolved) asking for credits, yes.

    @BetaMaster said:
    @VirMach any news about the AMS storage pre-order? Is there any chance this order will be fulfilled in the near future? Thanks.

    It's pretty much ready; I suspect we'll ship it off next week, but I'm unsure.

    @netrix said:
    @VirMach any update on LAXA024?
    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    This is online and functional but keeps running into control issues. I'll try to get it resolved again today.

    @taizi said:
    Can you add USDT (TRC20) and BUSD (BEP20) to CoinPayments? [...] Why not just add some stablecoins that are fast and cheap?

    I don't really like stablecoins, but I'll see. Coinbase Commerce was mainly meant for USD payments from Coinbase accounts.

    Thanked by: BetaMaster, Rosyor
  • Rosyor Member
    edited July 2022

    @netrix said:
    @VirMach any update on LAXA024?
    Operation Timed Out After 90001 Milliseconds With 0 Bytes Received

    LAXA024 here too. We're in the same sinking boat, buddy :'(

    Thanked by: netrix
  • Jake4 Member

    Any updates on FFME002? Either an ETA for a fix, or the steps required to get it fixed?

  • FrankZ Veteran

    I'm sad to say that my trusty San Jose Xeon VM was just migrated to Seattle node SEAZ008.
    Goodbye, old friend. 😭
    Well, at least I still have the old IP.

  • taizi Member

    @VirMach said: Coinbase Commerce was mainly meant for USD payments from Coinbase accounts.

    Then don't add a cooldown (the "waiting for confirmation" status) to Coinbase Commerce; I still have to wait after paying with my Coinbase balance...

  • soulchief Member
    edited July 2022

    I feel like things are slowly getting back to normal.

    LAXA006: My server was offline for around 4 days. Yesterday I used the reconfigure-network tool, and it's been up for almost 24 hours now.

    FFME001: The random reboots seem to be getting less frequent; it used to reboot every couple of hours, now it's 2-3 times a day (hopefully that's fully resolved soon).

    LAXA024: Probably needs the reconfigure-network tool to be run, but the control panel (and server) has been down/unreachable for 5 days.

    I was planning to move a database over to my LAXA006 server, but now I'll delay that for 1-2 months and hope everything is still okay with it by then. My FFME one was just going to run some scripts, so the random reboots, while annoying, don't really matter in my case, but I'm holding off on moving them for now. LAXA024 was just an idle server.
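
    (If anyone wants to measure reboot frequency instead of eyeballing it, standard Linux tooling keeps a history; nothing VirMach-specific is assumed here:

    # reboot records from wtmp, newest first
    last -x reboot | head
    # on systemd distros, every boot the journal still remembers
    journalctl --list-boots

    Comparing the timestamps makes it easy to tell whether the interval between reboots is actually growing.)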

  • henix Member

    @VirMach said: It's possible we forget once it's resolved

    great business model :#

    Thanked by: VirMach
  • byebvip Member
    edited July 2022

    SJCZ008: what a bad experience.
    Unavailable after migration.
    I got a new IP, but accessing it just shows an nginx default page, so it isn't reaching my VM.
    My status is always offline, and VNC is also unable to connect. Please help me fix it: #214018

    Yes, I tried reconfiguring the network, and I reinstalled the OS. It didn't do anything.

  • Any update on LAKVM9?

  • henix Member

    Long story short:
    Don't buy servers from OLX (cociu)

  • Is anyone on LAXA031 having a network issue?
    I can VNC into the server, but the server doesn't seem able to reach the internet.

    Tried:
    Reconfiguring the network via the panel tool: not working
    Editing /etc/network/interfaces, changing eth0 to ens3: not working
    Editing /etc/network/interfaces, switching to DHCP: not working (a minimal config sketch follows below)

    Main IP pings: false
    Node Online: false
    Service online: online
    Operating System: linux-debian-9-x86_64-minimal-latest
    Service Status: Active
    Registration Date: 2021-07-05

    I have no idea why it won't work. I have two other servers that came back fine after reconfiguring the network and then rebooting.
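
    (For anyone else stuck at this step, a minimal static setup for Debian 9's ifupdown; the address, netmask, and gateway below are placeholders you'd replace with the values shown in your panel:

    # /etc/network/interfaces — hypothetical values, use your panel-assigned ones
    auto lo
    iface lo inet loopback

    auto ens3
    iface ens3 inet static
        address 203.0.113.10      # placeholder: your main IP
        netmask 255.255.255.0
        gateway 203.0.113.1       # placeholder: usually .1 of your subnet
        dns-nameservers 1.1.1.1 8.8.8.8   # needs the resolvconf package; otherwise set /etc/resolv.conf

    Then run ifdown ens3; ifup ens3, or reboot. That said, with "Node Online: false" in the status above, the host itself may be the problem, in which case no guest-side config will bring the network back.)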

This discussion has been closed.